All our infrastructure is hosted on Google Cloud. This includes:
A 178 GB SSD containing almost a year's worth of fine-grained
"featuresets": every 10-20 seconds, across all currencies on Poloniex, we store computed features, projected price movements/volatility, and backfilled data recording how accurate the price and volatility predictions were.
We reduce this volume by not computing projections when there are no actual trades executing.
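That guard is conceptually very simple; a minimal sketch (the `fetch_recent_trades` and `compute_projection` names are hypothetical placeholders, not our actual API):

```python
def maybe_project(pair, fetch_recent_trades, compute_projection):
    """Skip the expensive projection step when the market is idle."""
    trades = fetch_recent_trades(pair)  # trades since the last tick
    if not trades:
        return None  # nothing executed, so store nothing
    return compute_projection(pair, trades)
```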
Things we can do to reduce costs: backup/archive strategies (e.g. to a Google Cloud Storage bucket), compression, and fewer indexes/less storage. Moving to another storage system is hard because Postgres gives a lot of flexibility with SQL queries and community connectors, e.g. Grafana.
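The archive-plus-compression idea can be sketched like this, assuming old featureset rows are serialized as CSV and gzipped before upload (the bucket path in the comment is made up for illustration):

```python
import gzip

def pack_featureset_rows(rows):
    """Serialize a batch of old featureset rows as CSV text and gzip it,
    ready to upload to a Cloud Storage archive bucket."""
    csv_text = "\n".join(",".join(str(v) for v in row) for row in rows)
    return gzip.compress(csv_text.encode("utf-8"))

# With google.cloud.storage the upload would then look roughly like:
# bucket.blob("archive/2018-01.csv.gz").upload_from_string(blob_bytes)
```

Because featureset rows are highly repetitive, gzip typically shrinks them dramatically before they ever hit storage pricing.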
Forecasts are all computed and then written to Firebase in one bulk operation to reduce writes. Charts are live-updated, and Firebase is also exposed to customers as an API, which makes it very hard to migrate away from.
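The bulk-write pattern looks roughly like this (a sketch: `write_batch` stands in for whatever batched-write call the client library provides; note that Firestore caps a single batch at 500 operations, hence the chunking):

```python
def chunked(items, size=500):
    """Yield fixed-size chunks; Firestore limits a batch to 500 writes."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def flush_forecasts(forecasts, write_batch):
    """Write all computed forecasts in as few batched calls as possible.

    Returns the number of round trips made instead of one per document.
    """
    calls = 0
    for chunk in chunked(list(forecasts.items())):
        write_batch(chunk)
        calls += 1
    return calls
```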
The frontend serving app shares code with the backend forecasting system and is written in Python 3, so it uses the App Engine Dockerized Flexible environment. It also had to be that way at the time, when hosted Postgres was still a beta product. The frontend doesn't need to do much work, in part thanks to CloudFlare :)
It could be optimised further by switching to the App Engine standard environment, but having to abandon Python 3 makes that unappealing.
We also store and download large datasets imported for use in Google Data Studio and BigQuery.
This is just crazy cheap...
The main forecasting process runs on an n1-highmem-2 (2 vCPUs, 13 GB memory). It uses a fairly memory-heavy forecasting algorithm and a lot of data. It has been running with minimal hiccups thanks to running under supervisor, plus a manager process that restarts it if it detects things aren't progressing quickly enough.
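A progress watchdog like that can be as simple as checking a heartbeat file the worker touches each cycle; a minimal sketch (the path and timeout here are invented, not our production values):

```python
import time

HEARTBEAT = "forecaster.heartbeat"  # worker rewrites this each cycle
MAX_SILENCE = 300  # seconds without progress before we restart

def beat(path=HEARTBEAT):
    """Called by the worker after each successful forecasting cycle."""
    with open(path, "w") as f:
        f.write(str(time.time()))

def is_stalled(path=HEARTBEAT, max_silence=MAX_SILENCE, now=None):
    """Called by the manager; True means kill the worker and let
    supervisor restart it."""
    now = time.time() if now is None else now
    try:
        with open(path) as f:
            last = float(f.read())
    except (OSError, ValueError):
        return True  # no heartbeat at all counts as stalled
    return now - last > max_silence
```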
It downloads all the data from Poloniex in parallel and makes the forecast across all pairs in a single process. It only has two vCPUs because it needed the extra memory (there's no way to get that much memory without also getting the extra CPU), and it uses the newer, faster CPU architecture.
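The parallel download is the classic thread-pool pattern; a sketch, where `fetch_trade_history` is a placeholder for the actual Poloniex client call:

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(pairs, fetch_trade_history, workers=16):
    """Fetch every pair's trade history concurrently.

    The work is network-bound, so threads are enough even with
    only 2 vCPUs available."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(fetch_trade_history, pairs)
        return dict(zip(pairs, results))
```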
There are ways we can reduce the overhead, e.g. using a less memory-intensive forecasting algorithm, or running quantization to compress a TensorFlow model.
Any model needs to adapt to market conditions, which makes this tricky: we have to prove a new model performs better on the training set, the test set, and live data, and only deploy it if it passes performance thresholds. Currently the training/testing is done "offline" on a machine outside Google Cloud and deployed from there, so in a sense it costs nothing.
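The deployment gate itself is simple once the evaluation metrics exist; a sketch, with split names and the margin parameter invented for illustration:

```python
def should_deploy(candidate_scores, incumbent_scores, min_margin=0.0):
    """Approve a new model only if it beats the current one on every
    evaluation split (training, held-out test, and live data)."""
    return all(
        candidate_scores[split] > incumbent_scores[split] + min_margin
        for split in ("train", "test", "live")
    )
```

The `min_margin` knob guards against deploying models whose improvement is within noise.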
The background processes (accepting payments, email, backfilling how accurate older projections were) run on 2 vCPUs with 5.75 GB memory. This could be scaled down a bit once we move to a safer, more decoupled system than running everything as cron jobs on a single machine, e.g. task queues etc. The background process could also use a faster algorithm for computing rolling weighted averages of trade prices. We have had some problems with cron jobs not finishing in time and the machine becoming overloaded.
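For the rolling weighted average, an incremental sliding window avoids recomputing the whole sum on every trade; a sketch of the idea (not our production code), maintaining running sums so each update is O(1) instead of O(window):

```python
from collections import deque

class RollingVWAP:
    """Volume-weighted average price over the last `window` trades,
    updated incrementally in O(1) per trade."""

    def __init__(self, window):
        self.window = window
        self.trades = deque()   # (price, volume) pairs in the window
        self.pv_sum = 0.0       # running sum of price * volume
        self.vol_sum = 0.0      # running sum of volume

    def add(self, price, volume):
        self.trades.append((price, volume))
        self.pv_sum += price * volume
        self.vol_sum += volume
        if len(self.trades) > self.window:  # evict the oldest trade
            old_p, old_v = self.trades.popleft()
            self.pv_sum -= old_p * old_v
            self.vol_sum -= old_v
        return self.pv_sum / self.vol_sum if self.vol_sum else None
```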
Paying for a static IP? That was a relic from when we provided a Grafana dashboard, which is now available as something you can run yourself from our GitHub project.
This month so far:
| Product | Description | Usage | Cost |
|---|---|---|---|
| App Engine | Cloud Firestore Entity Writes | 9,802,186.00 Count | $16.71 |
| App Engine | Cloud Firestore Read Ops | 2,621,494.00 Count | $0.79 |
| App Engine | Flex Instance Core Hours | 600.36 Hour | $31.58 |
| App Engine | Flex Instance RAM | 1,651.00 Gibibyte-hour | $11.72 |
| Cloud SQL | HA Postgres DB custom CORE running in NA (with 30% promotional discount) | 25.00 Day | $49.56 |
| Cloud SQL | HA Postgres DB custom RAM running in NA (with 30% promotional discount) | 2,250.00 Gibibyte-hour | $31.50 |
| Cloud SQL | Storage PD Snapshot | 813.91 Gibibyte-day | $2.10 |
| Cloud SQL | Storage PD SSD for HA Postgres DB in Americas | 4,771.35 Gibibyte-day | $52.33 |
| Cloud Storage | Class A Request Multi-Regional Storage | 2,640.00 Count | $0.01 |
| Cloud Storage | Download Australia | 305.61 Mebibyte | $0.06 |
| Cloud Storage | Multi-Regional Storage US | 7,627.83 Gibibyte-hour | $0.27 |
| Compute Engine | Custom instance Core running in Americas | 1,220.86 Hour | $40.50 |
| Compute Engine | Custom instance Core running in Americas | Credit applied | -$9.49 |
| Compute Engine | Custom instance Ram running in Americas | 3,509.98 Gibibyte-hour | $15.61 |
| Compute Engine | Custom instance Ram running in Americas | Credit applied | -$3.66 |
| Compute Engine | Highmem Intel N1 2 VCPU running in Americas | 610.43 Hour | $72.27 |
| Compute Engine | Highmem Intel N1 2 VCPU running in Americas | Credit applied | -$16.94 |
| Compute Engine | Network Inter Zone Egress | 84.15 Gibibyte | $0.84 |
| Compute Engine | Network Internet Egress from Americas to Americas | 8,140.46 Mebibyte | $0.83 |
| Compute Engine | Static IP Charge | 610.42 Hour | $6.09 |
| Compute Engine | Storage PD Capacity | 7,884.69 Gibibyte-day | $8.97 |

*Estimated charges before taxes, updated daily.* **Total: $311.67**
Check out our product and join for a free day of cryptocurrency projections! https://bitbank.nz