I did some searching, and this is what I found:

  • Bounds for tail probabilities of a weighted sum of Bernoulli trials
    A seminal 1988 paper by Raghavan: "Probabilistic Construction of Deterministic Algorithms: Approximating packing integer programs". In Section 1.1 he derives some bounds, which appear to be important as part of a stochastic optimization technique called randomized rounding.
  • Learning the distribution of a weighted sum of Bernoulli trials
    This paper by Daskalakis et al., published this year: "Learning Poisson Binomial Distributions". In Theorem 2 they state that it is possible to construct an algorithm that learns the desired distribution in polynomial time.
  • Code to compute the distribution of a weighted sum of Bernoulli trials
    In this post on researchgate.net, Dayan Adoniel asked the same question; it seems that Dayan developed code to compute the desired distribution and is willing to share it. (A minimal sketch of one way to do this computation is given below.)

Unfortunately, I do not have time right now to go over the details of the first two references, but hopefully you will find them useful.
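
Since I cannot post Dayan's code here, below is a minimal sketch of one way to compute the exact distribution, assuming the weights $a_j$ and success probabilities $p_j$ are known. The function name and the example numbers are mine, not from any of the references. It builds the PMF one trial at a time by conditioning on $X_j \in \{0, 1\}$:

```python
from collections import defaultdict

def weighted_bernoulli_pmf(a, p):
    """Exact PMF of Psi = sum_j a_j * X_j for independent X_j ~ Bernoulli(p_j).

    Returns a dict mapping each attainable value of Psi to its probability.
    The support can have up to 2^r points in the worst case (all distinct
    partial sums), but equal sums are merged, so it stays small when many
    partial sums coincide.
    """
    pmf = {0.0: 1.0}  # before any trial, Psi = 0 with probability 1
    for a_j, p_j in zip(a, p):
        nxt = defaultdict(float)
        for value, prob in pmf.items():
            # keys are rounded so floating-point noise does not split
            # mathematically equal sums into separate support points
            nxt[round(value, 12)] += prob * (1.0 - p_j)    # X_j = 0
            nxt[round(value + a_j, 12)] += prob * p_j      # X_j = 1
        pmf = dict(nxt)
    return pmf

# Small example with r = 3 trials (weights and probabilities chosen arbitrarily)
pmf = weighted_bernoulli_pmf([0.5, 0.3, 1.0], [0.2, 0.7, 0.5])
for value in sorted(pmf):
    print(f"P(Psi = {value:.2f}) = {pmf[value]:.4f}")
```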


Edit 1: Bounds in the tails of the distribution of a weighted sum of independent Bernoulli trials [Raghavan, 1988]

Let $a_1, a_2, \ldots, a_r$ be reals in $(0, 1]$. Let $X_1, X_2, \ldots, X_r$ be independent Bernoulli trials with $E[X_j] = p_j$. Define $\Psi = \sum_{j=1}^r a_jX_j$, so that by linearity $m = E[\Psi] = \sum_{j=1}^r a_j p_j$.

Theorem 1 (Deviation of $\Psi$ above its mean). Let $\delta > 0$ and $m = E[\Psi] > 0$. Then

$$\mathbf{P}(\Psi > m(1+\delta)) < \left[\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right]^m.$$
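
As a quick illustration (this check is mine, not part of Raghavan's paper), one can compare this bound against a Monte Carlo estimate of the upper tail; the weights, probabilities, and $\delta$ below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 50
a = rng.uniform(0.1, 1.0, size=r)   # weights in (0, 1]
p = rng.uniform(0.1, 0.9, size=r)   # success probabilities
m = float(a @ p)                    # E[Psi] = sum_j a_j p_j
delta = 0.2

# Monte Carlo estimate of P(Psi > m(1 + delta))
X = (rng.random((100_000, r)) < p).astype(float)  # Bernoulli draws, one row per sample
psi = X @ a
empirical = float((psi > m * (1 + delta)).mean())

bound = (np.e**delta / (1 + delta)**(1 + delta))**m
print(f"empirical tail = {empirical:.4f}, Theorem 1 bound = {bound:.4f}")
```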

Theorem 2 (Deviation of $\Psi$ below its mean). For $\gamma \in (0,1]$,

$$\mathbf{P}(\Psi-m < -\gamma m) < \left[\frac{e^\gamma}{(1+\gamma)^{(1+\gamma)}}\right]^m.$$
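
The same kind of sanity check works for the lower tail. On a small instance (again with arbitrary numbers, and reusing the `weighted_bernoulli_pmf` sketch from above), the exact tail probability can be compared with the bound of Theorem 2:

```python
from math import e

# Exact lower-tail check on a small instance
a = [0.4, 0.7, 1.0, 0.25, 0.9]
p = [0.6, 0.5, 0.8, 0.3, 0.7]
m = sum(a_j * p_j for a_j, p_j in zip(a, p))
gamma = 0.5

pmf = weighted_bernoulli_pmf(a, p)
lower_tail = sum(prob for value, prob in pmf.items() if value - m < -gamma * m)
bound = (e**gamma / (1 + gamma)**(1 + gamma))**m
print(f"P(Psi - m < -{gamma}*m) = {lower_tail:.4f} <= bound = {bound:.4f}")
```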