<aside>
🚧
This document is a Work In Progress
</aside>
Component Designs
- Processing Pipeline
- Results Forest
- Ingestion Engine
- Storage Layer
- API Access
Workstream Breakdown
The current plan is to build the new components as separate engines and modules. This will let us work off master with minimal overhead.
Milestone 1:
This is all of the work needed to build the processing pipeline. Except for the last step, all of this can be parallelized.
- In-memory caches
- Execution Requester Refactor
- Indexer Refactor
- Refactor Tx Error Message fetching and indexing
    - Currently these are downloaded and indexed in the ingestion engine. (code)
    - We can split this out so the messages are downloaded within the download step, then indexed within the index step.
    - Will need to come up with a strategy for when gRPC requests to ENs fail (see the retry sketch after this list)
- Processing Pipeline
    - Use the updated download and indexing logic, plus the in-memory caches
    - Build a state machine that handles the lifecycle of a result (see the sketch after this list):
        - Download Execution Data & Tx Error Messages
        - Index Execution Data & Tx Error Messages
        - Persist all data to the primary DB
    - Consume state updates from the parent pipeline to decide whether or not to start each step
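One possible strategy for EN gRPC failures is to retry each execution node a few times with a short backoff, then fall back to the next node before giving up. A minimal sketch, assuming a placeholder `errorMessageClient` interface; the method name, signature, and `fetchWithFallback` helper are illustrative and not the real EN API:

```go
package fetcher

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// errorMessageClient abstracts the gRPC call to a single execution node.
// The interface and method are hypothetical placeholders for this sketch.
type errorMessageClient interface {
	GetTransactionErrorMessages(ctx context.Context, blockID string) (map[string]string, error)
}

// fetchWithFallback tries each execution node in order, retrying each one
// with a small backoff before moving on to the next. It returns the first
// successful response, or an aggregate error if every node fails.
func fetchWithFallback(
	ctx context.Context,
	clients []errorMessageClient,
	blockID string,
	retriesPerNode int,
	backoff time.Duration,
) (map[string]string, error) {
	var errs error
	for _, client := range clients {
		for attempt := 0; attempt <= retriesPerNode; attempt++ {
			msgs, err := client.GetTransactionErrorMessages(ctx, blockID)
			if err == nil {
				return msgs, nil
			}
			errs = errors.Join(errs, fmt.Errorf("attempt %d: %w", attempt, err))

			// Back off before the next attempt, bailing out if the caller cancels.
			select {
			case <-ctx.Done():
				return nil, errors.Join(errs, ctx.Err())
			case <-time.After(backoff):
			}
		}
	}
	return nil, fmt.Errorf("all execution nodes failed: %w", errs)
}
```

Whether we retry the same node or rotate through nodes first is an open question; the sketch only shows the shape of the fallback loop.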
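For the result lifecycle, the state machine could walk a result through download, index, and persist, gating each step on state updates from the parent pipeline. A minimal sketch, assuming placeholder `State` values, a `Pipeline` struct, and step functions; the exact states and the rule for which parent state unlocks which step are assumptions, not the final design:

```go
package pipeline

import "context"

// State tracks how far a result has progressed through the pipeline.
// The values below are illustrative only.
type State int

const (
	StatePending State = iota
	StateDownloaded
	StateIndexed
	StatePersisted
)

// Pipeline drives a single result through download -> index -> persist.
// parentState delivers the parent pipeline's latest state so a child only
// advances a step once its parent is far enough along.
type Pipeline struct {
	state       State
	parentState <-chan State

	download func(context.Context) error // download Execution Data & Tx Error Messages
	index    func(context.Context) error // index Execution Data & Tx Error Messages
	persist  func(context.Context) error // persist all data to the primary DB
}

// Run consumes parent state updates and performs each step as soon as the
// parent has reached (or passed) the state that step requires.
func (p *Pipeline) Run(ctx context.Context) error {
	steps := []struct {
		requires State // minimum parent state before this step may start
		next     State // state to record once the step completes
		run      func(context.Context) error
	}{
		{StatePending, StateDownloaded, p.download},
		{StateDownloaded, StateIndexed, p.index},
		{StateIndexed, StatePersisted, p.persist},
	}

	parent := StatePending
	for _, step := range steps {
		// Block on parent state updates until this step is allowed to start.
		for parent < step.requires {
			select {
			case parent = <-p.parentState:
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		if err := step.run(ctx); err != nil {
			return err
		}
		p.state = step.next
	}
	return nil
}
```

The same loop shape would also let us abort or skip steps when the parent signals that its result was abandoned; that handling is omitted here.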
Milestone 2: