Introduction
It is no secret that the response of multiple countries to the outbreak of the COVID-19 pandemic has been abysmal, and the UK is no exception. With £355 billion in economic losses (OBR, 2021), a 4.8% unemployment rate (ONS, 2021) and, perhaps most shockingly, over 128,000 deaths to date (Gov.uk, 2021), the UK government has undoubtedly handled this crisis poorly, with two key government-led programmes standing out amongst the list of policy failures.
The Test and Trace (T&T) programme, established to curb Coronavirus reproduction, aims to provide accessible testing as well as contact tracing, notifying exposed individuals and instructing them to self-isolate. One major failure of the T&T programme has been the lack of a coherent data collection strategy for analysing setting-specific transmission; that is, the relative likelihood of contracting COVID in different locations, such as a hairdresser, a restaurant, and so on. Given how heavily the UK Tier System depends on our understanding of setting-specific transmission, this alone goes a long way towards explaining the latter's failure: settings grouped together under one tier often bear no resemblance in terms of their underlying transmission rates, resulting in unnecessary business closures and contributing to the economic haemorrhage.
Thus, we set out to formulate a data collection and analysis methodology that is both compatible with the current T&T programme and enables us to model and estimate setting-specific COVID-19 transmission rates, in the hope of guiding lockdown policies using a reliable, data-driven approach.
Our approach and assumptions
Data collection methodology
Starting out, our goal was to model a random vector of transmission rates for different settings. To estimate this model, we would require data that could be feasibly collected through the T&T programme. Therefore, we devised the following data collection strategy to accompany the probabilistic model developed below, making sure to prioritize its feasibility and scalability:
1. Data is to be collected from individuals in the T&T programme through a short survey of binary responses on whether or not they visited each of the surveyed locations in the last few days.
2. For each individual surveyed through T&T, the result of their COVID-19 antigen test is also observed.
3. Finally, a random survey is also sent out. This survey is identical to the one in (1.) but, since it is not distributed by T&T, no antigen test is taken and therefore no outcome is observed.
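As a concrete illustration, a single observation collected under this scheme could look as follows. This is a minimal sketch: the field names and the number of locations are our own, purely illustrative choices.

```python
import random

K = 5  # number of surveyed locations (illustrative)

def simulate_record(from_tt: bool) -> dict:
    """Simulate one survey record under the three-step collection scheme above."""
    record = {
        "visits": [random.randint(0, 1) for _ in range(K)],  # binary visit responses (step 1)
        "from_tt": from_tt,                                  # T&T survey vs. random survey (step 3)
    }
    # An antigen test outcome is only observed for T&T respondents (step 2)
    record["test_positive"] = random.randint(0, 1) if from_tt else None
    return record

tt_record = simulate_record(from_tt=True)
random_record = simulate_record(from_tt=False)
```

Note that the random-survey record carries no test outcome at all, which is exactly the asymmetry the model has to handle.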
Model assumptions
In order for us to combine this data collection methodology with a statistical model that allows for inference on the estimated setting-specific transmission rates, we had to lay down the following assumptions:
- Multiple visits to a location in one week are rare enough to be ignored, or, in settings (such as supermarkets) where multiple visits are expected, the distribution of the number of visits is tight.
Building a first-principles model
Given the data collection strategy detailed above, for each individual we observe a binary vector $x = (x_1, \dots, x_K)$, which records their responses to the location-visits survey, and a random variable $t$ which indicates whether they tested positive for COVID-19.
The base model
We formulate our model generatively through the latent transmission vector $z = (z_1, \dots, z_K)$, which follows a multivariate Bernoulli distribution with transmission probabilities dependent on the individual's attendance at each setting and the respective setting-specific transmission rates $\theta_k$. Furthermore, we capture the case of COVID transmission in a location not covered by the survey through the latent variable $z_0$, which is analogous to the $z_k$ but parameterized by the underlying base transmission risk $\rho$:

$$z_k \mid x_k \sim \text{Bernoulli}(x_k \theta_k), \quad k = 1, \dots, K, \qquad z_0 \sim \text{Bernoulli}(\rho)$$
We denote whether or not the individual contracted COVID-19 with the binary variable $y$ and define it through its natural relationship with the aforementioned latent variables. Since transmission in any of the surveyed settings, or in some other unaccounted-for location, results in contraction, we define $y$ as the indicator of this scenario:

$$y = \mathbb{1}\left\{z_0 + \sum_{k=1}^{K} z_k > 0\right\}$$
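This generative story can be forward-simulated in a few lines. The sketch below uses our notation, but the parameter values are arbitrary illustrations, not estimates:

```python
import numpy as np

def simulate_infection(x, theta, rho, rng):
    """Draw latent per-setting transmissions z and the overall infection indicator y."""
    z = rng.random(len(x)) < x * theta   # setting k can only transmit if visited (x_k = 1)
    z0 = rng.random() < rho              # transmission in an unaccounted-for location
    return int(z0 or z.any())            # infected if transmission occurred anywhere

rng = np.random.default_rng(0)
theta = np.array([0.05, 0.10, 0.02, 0.20])  # illustrative setting transmission rates
x = np.array([1, 0, 1, 1])                  # illustrative visit vector
y = simulate_infection(x, theta, rho=0.01, rng=rng)
```

The indicator structure is easy to sanity-check: visiting a setting with a transmission rate of one guarantees infection, while no visits and zero base risk guarantee none.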
Curbing selection bias
The next challenge for our model was to account for selection bias in T&T survey observations. More specifically, infected individuals (who feel unwell and present symptoms) are more likely to get in contact with T&T and hence select into the survey, introducing bias into any transmission rate estimates. To mitigate this, we collect observations of the visit vector $x$ from a random survey as well, and define the binary variable $s$ to indicate whether or not an observation came from T&T rather than the random survey.
From there, we utilize $s$ to define the testing rates $\gamma$ as the probabilities of getting tested conditional on COVID-19 infection status, and use these to down-weight the transmission likelihood for our T&T observations:

$$\gamma_1 = P(s = 1 \mid y = 1), \qquad \gamma_0 = P(s = 1 \mid y = 0)$$
Addressing inaccurate tests
Similarly, we also account for false positive and false negative antigen test results by defining the test sensitivity and specificity parameters $\lambda_+$ and $\lambda_-$ through the conditional probabilities of testing positive given infected and testing negative given non-infected, respectively:

$$\lambda_+ = P(t = 1 \mid y = 1), \qquad \lambda_- = P(t = 0 \mid y = 0)$$
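With these two parameters, the marginal probability of observing a positive test follows from the law of total probability. A one-function sketch with illustrative values:

```python
def prob_positive_test(p_infected: float, sens: float, spec: float) -> float:
    """P(t = 1) = sensitivity * P(y = 1) + (1 - specificity) * P(y = 0)."""
    return sens * p_infected + (1.0 - spec) * (1.0 - p_infected)

# Illustrative values: 80% sensitivity, 98% specificity, 10% infection probability
p = prob_positive_test(0.10, 0.80, 0.98)  # 0.08 true positives + 0.018 false positives
```

A perfect test (both rates equal to one) recovers the infection probability exactly, which is a useful sanity check on the direction of each term.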
Hierarchical extension
We modeled the hierarchical nature of setting-specific transmission by grouping each setting into one of $C$ encompassing classes of similar settings. For each class $c$, we model the transmission rates of member settings as draws from a logit-normal distribution, parameterized by the mean class transmission rate $\mu_c$ and the class transmission rate variance $\sigma_c^2$. It should be noted that the logit-normal guarantees each $\theta_k$ lies in $(0, 1)$:

$$\operatorname{logit}(\theta_k) \sim \mathcal{N}\!\left(\operatorname{logit}(\mu_c), \, \sigma_c^2\right) \quad \text{for setting } k \text{ in class } c$$
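A quick numpy sketch of one class makes the hierarchy concrete; the class mean, variance, and number of member settings below are arbitrary illustrations:

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(42)
mu_c, sigma2_c = 0.15, 0.3   # class mean transmission rate and variance (arbitrary)
n_settings = 6               # member settings in this class (arbitrary)

# Member-setting rates: normal draws on the logit scale, mapped back into (0, 1)
theta_c = inv_logit(rng.normal(logit(mu_c), np.sqrt(sigma2_c), size=n_settings))
```

Whatever the normal draw, the inverse-logit maps it back inside the unit interval, so every sampled rate is a valid probability.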
Modelling policy interventions
Finally, we also wanted to model the effect of policy interventions on different setting classes, to answer questions such as: "Does social distancing have different effects in cinemas than in restaurants?" To do this, we can introduce policy intervention parameters and model their interactions with different setting transmission rates, provided our data collection methodology can feasibly be extended to collect the data necessary to pin down these additional parameters.
We exemplify this with the mask-wearing intervention, with the objective of estimating the different effects mask-wearing can have in different settings. To do this, we could extend our data collection survey to ask individuals about their mask-wearing habits, allowing us to define the binary variable $m$ as an indicator for habitual mask-wearers.
This additional data then allows us to incorporate and estimate $\iota_c$, the class-specific mask-wearing impact on transmission rates:

$$\operatorname{logit}(\theta_k) \sim \mathcal{N}\!\left(\operatorname{logit}(\mu_c) + \iota_c m, \; \sigma_c^2\right) \quad \text{for setting } k \text{ in class } c$$
Modelling policy interventions in this way allows for a clear-cut interpretation of the intervention effect parameter. More specifically, with a little algebra, it becomes evident that $e^{\iota_c}$ essentially acts as a multiplier on the average class transmission odds:

$$\frac{\bar{\theta}_c}{1 - \bar{\theta}_c} = e^{\iota_c} \cdot \frac{\mu_c}{1 - \mu_c}, \qquad \text{where } \operatorname{logit}(\bar{\theta}_c) = \operatorname{logit}(\mu_c) + \iota_c$$
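This multiplier interpretation is easy to verify numerically: shifting a logit by $\iota$ multiplies the corresponding odds by exactly $e^{\iota}$. The values below are arbitrary:

```python
import math

def odds(p):
    return p / (1.0 - p)

def inv_logit(u):
    return 1.0 / (1.0 + math.exp(-u))

mu_c, iota_c = 0.2, -1.0   # class mean rate and mask effect (arbitrary illustration)

# Rate for a mask-wearer: same logit, shifted by iota_c
theta_masked = inv_logit(math.log(odds(mu_c)) + iota_c)

multiplier = odds(theta_masked) / odds(mu_c)
# multiplier equals exp(iota_c): a negative iota_c shrinks the transmission odds
```

Since $\operatorname{logit}$ is exactly the log-odds, the additive shift on the logit scale becomes a multiplicative factor on the odds scale, which is what makes $\iota_c$ so interpretable.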
Simulating our data
Metadata
In order to test out this model, the following metadata must be defined in relation to the application context:
- A vector $(K_1, \dots, K_C)$ containing the number of settings per class
- The number of setting classes $C$ considered in the model
- The total population $N$
- The number of random surveys sent out, $n_R$
- The total number of T&T samples, $n_T$
In particular, it is worth highlighting that $n_R$ and $n_T$ are constrained by the size of the population:

$$n_R + n_T \leq N$$

It is assumed that, in the typical case, both $N$ and $n_T$ are predetermined, and the authority applying the model gets to choose $n_R$ subject to the above. In our simulation, $N$ and $n_R$ are pre-set and $n_T$ is randomly determined given the simulated testing rates.
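The metadata block of our simulation code took roughly the following shape. The concrete values here are illustrative placeholders, not the ones used in our actual runs:

```python
import numpy as np

#### METADATA ####
settings_per_class = np.array([3, 2, 4, 2])  # settings in each class (illustrative)
n_classes = len(settings_per_class)          # number of setting classes C
n_settings = int(settings_per_class.sum())   # total number of surveyed settings

population = 10_000   # total population N
n_random = 1_000      # random surveys sent out (n_R, chosen by the authority)
n_tt = 2_500          # T&T samples (n_T, driven by testing rates in the simulation)

# The combined survey counts can never exceed the population
assert n_random + n_tt <= population
```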
Population ground truth
Since our data collection strategy is hypothetical (and idealised), to test our model we must first specify a ground truth for the population parameters and then simulate our data accordingly. To this end, we sampled our population parameters as follows, making sure to try a range of values to ensure model robustness:
Parameter | Model Notation | Simulation | Model Priors
---|---|---|---
Setting transmission rates | $\theta_k$ | Logit-normal distribution parameterized by class transmission rate mean and variance |
Mean class transmission rates | $\mu_c$ | Beta distribution |
Class transmission rate variance | $\sigma_c^2$ | Inverse gamma distribution |
Base transmission rate | $\rho$ | Beta distribution |
Class-specific mask impact | $\iota_c$ | Normal with negative mean |
Testing rates | $\gamma$ | Beta distribution |
Test sensitivity and specificity | $\lambda_+, \lambda_-$ | Beta distribution | Shape hyperparameters calibrated to match the antigen test sensitivity and specificity results from the Joint PHE Porton Down & University of Oxford SARS-CoV-2 test development and validation cell
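Putting the table together, the ground-truth sampling step of our simulation can be sketched as follows. The distribution hyperparameters and class sizes are illustrative assumptions, not the calibrated values from our runs:

```python
import numpy as np

rng = np.random.default_rng(7)
settings_per_class = [3, 2, 4]              # illustrative class sizes
C = len(settings_per_class)

# Class-level parameters
mu = rng.beta(2, 12, size=C)                # mean class transmission rates
sigma2 = 1.0 / rng.gamma(10, 1.0, size=C)   # inverse-gamma class variances
iota = rng.normal(-1.0, 0.5, size=C)        # mask impacts, centred below zero

# Setting-level rates: logit-normal draws within each class
logit = lambda p: np.log(p / (1 - p))
theta = np.concatenate([
    1 / (1 + np.exp(-rng.normal(logit(mu[c]), np.sqrt(sigma2[c]), size=n)))
    for c, n in enumerate(settings_per_class)
])

rho = rng.beta(2, 30)                       # base transmission rate
gamma_rates = rng.beta(5, 5, size=2)        # testing rates
lam = rng.beta(50, 5, size=2)               # test sensitivity / specificity
```

The data itself is then simulated from these parameters through the generative model described earlier.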
Fitting our Model
Our model in Stan
We implemented the code for our first-principles model using Stan, a probabilistic programming language that compiles to C++. Stan allows users to carry out full Bayesian inference on statistical models via Markov chain Monte Carlo (MCMC) sampling, and enables model specification in a block-wise fashion. The main building blocks of our Stan model are:
- The data block, which specifies the type and dimensions of data used to train the model.
- The parameter block, which specifies all underlying statistical parameters of the model.
- The model block, which assigns parameter priors and constructs the model likelihood describing the joint probability of the observed data as a function of the parameters.
With computational feasibility in mind, we also coded a version of our model using the TensorFlow Probability framework, as this version supports GPUs and distributed computing for greater computational power. For a more extensive review of the code produced, please refer to the project GitHub repository.
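Our full Stan program lives in the repository; a stripped-down sketch of its block structure, held in a Python string as in our code, looks like this. The sketch covers only the core transmission likelihood (no selection bias or test error terms), and the priors shown are illustrative:

```python
model_code = """
data {
  int<lower=1> N;                       // number of observations
  int<lower=1> K;                       // number of surveyed settings
  array[N, K] int<lower=0, upper=1> x;  // visit indicators
  array[N] int<lower=0, upper=1> t;     // antigen test outcomes
}
parameters {
  vector<lower=0, upper=1>[K] theta;    // setting-specific transmission rates
  real<lower=0, upper=1> rho;           // base transmission rate
}
model {
  theta ~ beta(1, 5);                   // illustrative priors
  rho ~ beta(1, 10);
  for (n in 1:N) {
    // P(infected) = 1 - (1 - rho) * prod_k (1 - x[n,k] * theta[k])
    real p = 1 - (1 - rho) * prod(1 - to_vector(x[n]) .* theta);
    t[n] ~ bernoulli(p);
  }
}
"""
```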
Markov Chain Monte Carlo Posterior Sampling
In general, the aim of Bayesian inference is to derive the posterior distribution of our parameters $\theta$ given the observed data $D$, defined mathematically via Bayes' theorem as below, where $p(\theta)$ is the parameter prior and $p(D \mid \theta)$ is the model likelihood:

$$p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}$$
Unfortunately, in our model (as in most Bayesian models) this distribution cannot be evaluated analytically, given the high-dimensional parameter space. To overcome this, Stan utilizes the NUTS algorithm (a member of the general class of MCMC methods) to infer the posterior distribution by repeatedly drawing samples from it, even though its full closed-form characterization is unknown. Unlike regular Monte Carlo sampling, which draws independent samples, MCMC allows for 'smarter' sampling: it draws correlated samples from a Markov chain whose stationary distribution is the target posterior, which lets the sampling process reach regions of high density much faster. For more details on how Stan achieves this with the NUTS algorithm, the interested reader should consult the Stan documentation.
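To make the mechanics concrete, here is a minimal random-walk Metropolis sampler, a far simpler MCMC method than NUTS, targeting a toy Beta(8, 4) posterior. All names and values here are our own illustration:

```python
import math
import random

def log_target(p):
    """Unnormalized log density of a Beta(8, 4) posterior."""
    if not 0.0 < p < 1.0:
        return float("-inf")
    return 7 * math.log(p) + 3 * math.log(1 - p)

random.seed(0)
p, samples = 0.5, []
for _ in range(25_000):
    proposal = p + random.gauss(0.0, 0.1)   # correlated random-walk proposal
    # Accept with probability min(1, target(proposal) / target(current))
    if math.log(random.random()) < log_target(proposal) - log_target(p):
        p = proposal
    samples.append(p)

burned = samples[5_000:]                    # discard burn-in (cf. num_warmup below)
posterior_mean = sum(burned) / len(burned)  # true posterior mean is 8/12
```

Even this crude sampler recovers the posterior mean well, and it shows where the burn-in and sample-count hyperparameters discussed next come from.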
At its core, MCMC sampling with the NUTS algorithm requires three hyperparameters to be specified:
- `num_warmup` sets the number of initial samples discarded as burn-in, allowing the chain to converge onto its stationary distribution.
- `n_samples` sets the number of samples to be drawn from the distribution.
- `n_chains` sets the number of chains to construct; running more well-mixed chains increases robustness.
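Our hyperparameter settings followed this pattern; the values shown are indicative of the kind we used, and the commented cmdstanpy call sketches how they map onto a sampling run (consult the repository for the exact configuration):

```python
#### MCMC SAMPLING HYPERPARAMETERS ####
num_warmup = 1_000   # burn-in draws discarded per chain
n_samples = 2_000    # retained posterior draws per chain
n_chains = 4         # independent chains for convergence diagnostics

total_draws = n_chains * n_samples   # draws available for posterior summaries

# With cmdstanpy these map onto (indicative; requires CmdStan installed):
# fit = model.sample(data=stan_data, chains=n_chains,
#                    iter_warmup=num_warmup, iter_sampling=n_samples)
```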
Results and conclusion
Visualizing posterior samples
After running our model, we computed the summary statistics of our posterior samples, shown in the table below. Additionally, we visualize these in the subsequent traceplots, which show the distributions of all setting transmission rates, as well as the other model parameters, on the left, and the sampled values on the right. Overall, we can see that the chains mixed properly, given both the shape of our sampled value traces and the $\hat{R}$ statistics (equal to one for all model parameters), which suggests that improper model parametrization is unlikely to be an issue.
Parameter | mean | sd | hdi_3% | hdi_97% | mcse_mean | mcse_sd | ess_mean | ess_sd | ess_bulk | ess_tail | r_hat
---|---|---|---|---|---|---|---|---|---|---|---
theta[0] | 0.250 | 0.111 | 0.042 | 0.444 | 0.002 | 0.001 | 4090.0 | 4090.0 | 3711.0 | 3200.0 | 1.0 |
theta[1] | 0.224 | 0.101 | 0.042 | 0.410 | 0.002 | 0.001 | 4116.0 | 4116.0 | 3688.0 | 3103.0 | 1.0 |
theta[2] | 0.212 | 0.098 | 0.040 | 0.390 | 0.001 | 0.001 | 4352.0 | 4352.0 | 3834.0 | 3492.0 | 1.0 |
theta[3] | 0.228 | 0.098 | 0.053 | 0.404 | 0.002 | 0.001 | 2733.0 | 2733.0 | 2720.0 | 3567.0 | 1.0 |
theta[4] | 0.231 | 0.099 | 0.066 | 0.420 | 0.002 | 0.001 | 2788.0 | 2787.0 | 2807.0 | 3854.0 | 1.0 |
theta[5] | 0.223 | 0.097 | 0.062 | 0.410 | 0.002 | 0.001 | 2890.0 | 2877.0 | 2818.0 | 3744.0 | 1.0 |
theta[6] | 0.267 | 0.108 | 0.080 | 0.471 | 0.002 | 0.001 | 2956.0 | 2956.0 | 2817.0 | 3703.0 | 1.0 |
theta[7] | 0.262 | 0.110 | 0.076 | 0.470 | 0.002 | 0.002 | 2323.0 | 2323.0 | 2254.0 | 3422.0 | 1.0 |
theta[8] | 0.189 | 0.127 | 0.000 | 0.408 | 0.002 | 0.001 | 5756.0 | 5756.0 | 4251.0 | 2878.0 | 1.0 |
theta[9] | 0.314 | 0.088 | 0.153 | 0.476 | 0.001 | 0.001 | 4434.0 | 4431.0 | 4447.0 | 5614.0 | 1.0 |
theta[10] | 0.322 | 0.092 | 0.154 | 0.495 | 0.001 | 0.001 | 4290.0 | 4290.0 | 4273.0 | 5728.0 | 1.0 |
theta[11] | 0.282 | 0.085 | 0.131 | 0.437 | 0.001 | 0.001 | 4411.0 | 4323.0 | 4461.0 | 5124.0 | 1.0 |
theta[12] | 0.304 | 0.087 | 0.150 | 0.470 | 0.001 | 0.001 | 4219.0 | 4200.0 | 4279.0 | 5716.0 | 1.0 |
theta[13] | 0.223 | 0.146 | 0.000 | 0.476 | 0.002 | 0.001 | 6577.0 | 6577.0 | 5525.0 | 4074.0 | 1.0 |
theta[14] | 0.210 | 0.111 | 0.021 | 0.414 | 0.002 | 0.001 | 4139.0 | 4139.0 | 3687.0 | 3482.0 | 1.0 |
theta[15] | 0.236 | 0.122 | 0.028 | 0.458 | 0.002 | 0.001 | 4540.0 | 4540.0 | 4006.0 | 3202.0 | 1.0 |
theta[16] | 0.133 | 0.103 | 0.000 | 0.318 | 0.001 | 0.001 | 6605.0 | 6605.0 | 4609.0 | 2862.0 | 1.0 |
theta[17] | 0.207 | 0.128 | 0.001 | 0.430 | 0.002 | 0.001 | 4882.0 | 4882.0 | 3842.0 | 2402.0 | 1.0 |
theta[18] | 0.222 | 0.105 | 0.043 | 0.415 | 0.002 | 0.001 | 3161.0 | 3161.0 | 3078.0 | 4504.0 | 1.0 |
theta[19] | 0.230 | 0.109 | 0.046 | 0.428 | 0.002 | 0.001 | 3269.0 | 3269.0 | 3178.0 | 4160.0 | 1.0 |
theta[20] | 0.189 | 0.090 | 0.037 | 0.353 | 0.001 | 0.001 | 3694.0 | 3694.0 | 3518.0 | 4593.0 | 1.0 |
theta[21] | 0.215 | 0.102 | 0.045 | 0.408 | 0.002 | 0.001 | 3363.0 | 3363.0 | 3232.0 | 4370.0 | 1.0 |
rho | 0.211 | 0.064 | 0.087 | 0.332 | 0.001 | 0.001 | 5180.0 | 5180.0 | 5262.0 | 3748.0 | 1.0 |
mu[0] | 0.230 | 0.100 | 0.053 | 0.419 | 0.002 | 0.001 | 3899.0 | 3899.0 | 3493.0 | 2856.0 | 1.0 |
mu[1] | 0.241 | 0.093 | 0.080 | 0.418 | 0.002 | 0.001 | 2338.0 | 2338.0 | 2262.0 | 3169.0 | 1.0 |
mu[2] | 0.202 | 0.138 | 0.000 | 0.447 | 0.002 | 0.001 | 5961.0 | 5961.0 | 4364.0 | 2970.0 | 1.0 |
mu[3] | 0.307 | 0.080 | 0.164 | 0.455 | 0.001 | 0.001 | 3525.0 | 3512.0 | 3531.0 | 4945.0 | 1.0 |
mu[4] | 0.234 | 0.154 | 0.000 | 0.504 | 0.002 | 0.001 | 6848.0 | 6848.0 | 5735.0 | 3834.0 | 1.0 |
mu[5] | 0.228 | 0.117 | 0.027 | 0.444 | 0.002 | 0.001 | 4374.0 | 4374.0 | 3914.0 | 3227.0 | 1.0 |
mu[6] | 0.145 | 0.114 | 0.000 | 0.351 | 0.001 | 0.001 | 7078.0 | 7078.0 | 4699.0 | 2933.0 | 1.0 |
mu[7] | 0.219 | 0.139 | 0.000 | 0.460 | 0.002 | 0.001 | 5242.0 | 5242.0 | 4052.0 | 2295.0 | 1.0 |
mu[8] | 0.214 | 0.096 | 0.051 | 0.391 | 0.002 | 0.001 | 3137.0 | 3137.0 | 3025.0 | 3789.0 | 1.0 |
sigma2[0] | 0.131 | 0.051 | 0.056 | 0.220 | 0.001 | 0.000 | 7513.0 | 5708.0 | 10852.0 | 5458.0 | 1.0 |
sigma2[1] | 0.150 | 0.062 | 0.061 | 0.260 | 0.001 | 0.001 | 6404.0 | 5250.0 | 8836.0 | 5385.0 | 1.0 |
sigma2[2] | 0.118 | 0.043 | 0.053 | 0.193 | 0.000 | 0.000 | 8280.0 | 6082.0 | 12257.0 | 5441.0 | 1.0 |
sigma2[3] | 0.135 | 0.053 | 0.059 | 0.226 | 0.001 | 0.001 | 7057.0 | 5298.0 | 10888.0 | 5228.0 | 1.0 |
sigma2[4] | 0.117 | 0.043 | 0.054 | 0.196 | 0.000 | 0.000 | 9024.0 | 6569.0 | 13371.0 | 5450.0 | 1.0 |
sigma2[5] | 0.125 | 0.048 | 0.054 | 0.211 | 0.001 | 0.000 | 8224.0 | 6024.0 | 12240.0 | 5518.0 | 1.0 |
sigma2[6] | 0.118 | 0.043 | 0.056 | 0.200 | 0.000 | 0.000 | 8184.0 | 5615.0 | 13440.0 | 5532.0 | 1.0 |
sigma2[7] | 0.118 | 0.043 | 0.055 | 0.198 | 0.000 | 0.000 | 7992.0 | 5761.0 | 12288.0 | 4824.0 | 1.0 |
sigma2[8] | 0.138 | 0.052 | 0.058 | 0.232 | 0.001 | 0.000 | 7786.0 | 6455.0 | 9620.0 | 5729.0 | 1.0 |
iota[0] | -0.954 | 0.619 | -2.131 | 0.208 | 0.008 | 0.006 | 5775.0 | 5775.0 | 5852.0 | 4816.0 | 1.0 |
iota[1] | -1.256 | 0.566 | -2.343 | -0.197 | 0.009 | 0.006 | 4263.0 | 4263.0 | 4302.0 | 4532.0 | 1.0 |
iota[2] | -1.390 | 0.864 | -2.993 | 0.262 | 0.009 | 0.007 | 9762.0 | 8061.0 | 9793.0 | 6472.0 | 1.0 |
iota[3] | -1.571 | 0.429 | -2.392 | -0.812 | 0.006 | 0.004 | 5454.0 | 5227.0 | 5576.0 | 5157.0 | 1.0 |
iota[4] | -1.163 | 0.866 | -2.829 | 0.437 | 0.008 | 0.007 | 10823.0 | 7689.0 | 10823.0 | 6792.0 | 1.0 |
iota[5] | -0.620 | 0.734 | -2.014 | 0.734 | 0.009 | 0.007 | 5981.0 | 4847.0 | 6037.0 | 5498.0 | 1.0 |
iota[6] | -1.295 | 0.875 | -2.947 | 0.376 | 0.009 | 0.007 | 10158.0 | 7307.0 | 10212.0 | 6283.0 | 1.0 |
iota[7] | -1.039 | 0.850 | -2.644 | 0.550 | 0.009 | 0.007 | 9000.0 | 6438.0 | 9066.0 | 6105.0 | 1.0 |
iota[8] | -0.505 | 0.599 | -1.628 | 0.633 | 0.009 | 0.006 | 4381.0 | 4381.0 | 4388.0 | 4901.0 | 1.0 |
gamma[0] | 0.461 | 0.018 | 0.429 | 0.497 | 0.000 | 0.000 | 2755.0 | 2747.0 | 2819.0 | 3925.0 | 1.0 |
gamma[1] | 0.191 | 0.015 | 0.162 | 0.219 | 0.000 | 0.000 | 10384.0 | 10364.0 | 10370.0 | 5707.0 | 1.0 |
lambda[0] | 0.747 | 0.020 | 0.712 | 0.785 | 0.000 | 0.000 | 3566.0 | 3554.0 | 3625.0 | 5333.0 | 1.0 |
lambda[1] | 0.997 | 0.001 | 0.996 | 0.998 | 0.000 | 0.000 | 15214.0 | 15214.0 | 15246.0 | 5420.0 | 1.0 |
Comparison with ground truth
After fitting, one key question is the extent to which our model successfully estimates the underlying transmission rates. To test this, we take the posterior mean as the minimum mean squared error estimate and compare it to the ground-truth parameters used to simulate the data-generating process. As shown in the table below, most parameter estimates fall reasonably close to the actual ground-truth values, and most fall within the 94% highest density interval (HDI), a good indicator of our model's accuracy.
The key reason this comparison works as a proof of concept is that our model only ever observes the parameter priors and the training data, never the simulating distributions themselves. Moreover, excluding the antigen test accuracy rates, the priors are specifically chosen to be largely uninformative relative to the simulating distributions, in order to limit their influence on model performance.
Finally, a question of interest is how the accuracy of the model is affected by the number of training samples and the dimension of the parameter space. The results for both are excluded for brevity, but they fall in line with expectations (model accuracy increases with training data and decreases with dimensionality); the interested reader should consult the project repository for more details.
Parameter | Ground Truth | Posterior Mean | HDI 3% | HDI 97%
---|---|---|---|---
theta[0] | 0.125388 | 0.250 | 0.042 | 0.444 |
theta[1] | 0.129571 | 0.224 | 0.042 | 0.410 |
theta[2] | 0.127706 | 0.212 | 0.040 | 0.390 |
theta[3] | 0.121898 | 0.228 | 0.053 | 0.404 |
theta[4] | 0.141731 | 0.231 | 0.066 | 0.420 |
theta[5] | 0.141376 | 0.223 | 0.062 | 0.410 |
theta[6] | 0.116330 | 0.267 | 0.080 | 0.471 |
theta[7] | 0.140423 | 0.262 | 0.076 | 0.470 |
theta[8] | 0.159181 | 0.189 | 0.000 | 0.408 |
theta[9] | 0.126710 | 0.314 | 0.153 | 0.476 |
theta[10] | 0.133879 | 0.322 | 0.154 | 0.495 |
theta[11] | 0.129020 | 0.282 | 0.131 | 0.437 |
theta[12] | 0.144806 | 0.304 | 0.150 | 0.470 |
theta[13] | 0.032578 | 0.223 | 0.000 | 0.476 |
theta[14] | 0.117005 | 0.210 | 0.021 | 0.414 |
theta[15] | 0.133092 | 0.236 | 0.028 | 0.458 |
theta[16] | 0.108155 | 0.133 | 0.000 | 0.318 |
theta[17] | 0.065746 | 0.207 | 0.001 | 0.430 |
theta[18] | 0.105766 | 0.222 | 0.043 | 0.415 |
theta[19] | 0.124622 | 0.230 | 0.046 | 0.428 |
theta[20] | 0.121615 | 0.189 | 0.037 | 0.353 |
theta[21] | 0.120505 | 0.215 | 0.045 | 0.408 |
rho | 0.183874 | 0.211 | 0.087 | 0.332 |
mu[0] | 0.119764 | 0.230 | 0.053 | 0.419 |
mu[1] | 0.133596 | 0.241 | 0.080 | 0.418 |
mu[2] | 0.140694 | 0.202 | 0.000 | 0.447 |
mu[3] | 0.136886 | 0.307 | 0.164 | 0.455 |
mu[4] | 0.035124 | 0.234 | 0.000 | 0.504 |
mu[5] | 0.137612 | 0.228 | 0.027 | 0.444 |
mu[6] | 0.101337 | 0.145 | 0.000 | 0.351 |
mu[7] | 0.075024 | 0.219 | 0.000 | 0.460 |
mu[8] | 0.118877 | 0.214 | 0.051 | 0.391 |
sigma2[0] | 0.101624 | 0.131 | 0.056 | 0.220 |
sigma2[1] | 0.097299 | 0.150 | 0.061 | 0.260 |
sigma2[2] | 0.094004 | 0.118 | 0.053 | 0.193 |
sigma2[3] | 0.103104 | 0.135 | 0.059 | 0.226 |
sigma2[4] | 0.105677 | 0.117 | 0.054 | 0.196 |
sigma2[5] | 0.091571 | 0.125 | 0.054 | 0.211 |
sigma2[6] | 0.107485 | 0.118 | 0.056 | 0.200 |
sigma2[7] | 0.106212 | 0.118 | 0.055 | 0.198 |
sigma2[8] | 0.105388 | 0.138 | 0.058 | 0.232 |
iota[0] | -0.980943 | -0.954 | -2.131 | 0.208 |
iota[1] | -1.108789 | -1.256 | -2.343 | -0.197 |
iota[2] | -0.864302 | -1.390 | -2.993 | 0.262 |
iota[3] | -1.395580 | -1.571 | -2.392 | -0.812 |
iota[4] | -1.423259 | -1.163 | -2.829 | 0.437 |
iota[5] | -0.037854 | -0.620 | -2.014 | 0.734 |
iota[6] | -0.919838 | -1.295 | -2.947 | 0.376 |
iota[7] | -1.245577 | -1.039 | -2.644 | 0.550 |
iota[8] | -0.524771 | -0.505 | -1.628 | 0.633 |
gamma[0] | 0.563992 | 0.461 | 0.429 | 0.497 |
gamma[1] | 0.180336 | 0.191 | 0.162 | 0.219 |
lambda[0] | 0.808996 | 0.747 | 0.712 | 0.785 |
lambda[1] | 0.979214 | 0.997 | 0.996 | 0.998 |
Conclusion and future work
This project aimed to understand and model the nature of setting-specific COVID transmission with the purpose of informing and improving related policy interventions. The first-principles model presented in this article lets us do this in a way that properly captures the data-generating process behind setting-specific transmission, while remaining compatible with a simple, feasible data collection methodology. Furthermore, the Bayesian nature of our model allows flexibility when incorporating expert knowledge through priors, as exemplified by the calibrated antigen test accuracy parameters, and provides accurate estimates even with few training samples thanks to its generative nature. Finally, from an implementation perspective, our model is built with scalability in mind, as the TensorFlow Probability version makes large-scale applications feasible with the aid of distributed computing.
There is still much left to be done, and many possible extensions could make our model more robust and powerful. One angle for future work would be extending the model to capture the effect of other policy interventions (such as social distancing regulations or vaccination) in order to derive insights about their effectiveness in different settings. Another possible extension would be to add temporal dynamics by combining our model with an SIR-type compartmental model.
Overall, we believe this model makes a strong case for the statistical modelling of setting-specific epidemic transmission, with considerable potential for future extensions and applications to similar problems, and stands as a useful illustration of the benefits Bayesian modelling can bring in applied settings.