How about the following short "manual calculation"?

import math
import numpy

def weighted_avg_and_std(values, weights):
    """
    Return the weighted average and standard deviation.

    values, weights -- NumPy ndarrays with the same shape.
    """
    average = numpy.average(values, weights=weights)
    # Fast and numerically precise:
    variance = numpy.average((values - average)**2, weights=weights)
    return (average, math.sqrt(variance))
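
For example, with some made-up data (with uniform weights the result reduces to the ordinary population mean and standard deviation):

values = numpy.array([1.0, 2.0, 3.0, 4.0])

# Equal weights: same as numpy.mean(values) and numpy.std(values)
print(weighted_avg_and_std(values, numpy.ones_like(values)))          # (2.5, ~1.118)

# Up-weighting the last value pulls the mean towards it
print(weighted_avg_and_std(values, numpy.array([1.0, 1.0, 1.0, 10.0])))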

There is a class in statsmodels that makes it easy to calculate weighted statistics: statsmodels.stats.weightstats.DescrStatsW.

Assuming this dataset and weights:

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

array = np.array([1,2,1,2,1,2,1,3])
weights = np.ones_like(array)
weights[3] = 100

You initialize the class (note that you have to pass in the correction factor, i.e. the delta degrees of freedom, at this point):

weighted_stats = DescrStatsW(array, weights=weights, ddof=0)

Then you can calculate:

  • .mean, the weighted mean:

    >>> weighted_stats.mean      
    1.97196261682243
    
  • .std, the weighted standard deviation:

    >>> weighted_stats.std       
    0.21434289609681711
    
  • .var, the weighted variance:

    >>> weighted_stats.var       
    0.045942877107170932
    
  • .std_mean, the standard error of the weighted mean:

    >>> weighted_stats.std_mean  
    0.020818822467555047
    

    In case you're interested in the relation between the standard error and the standard deviation: for ddof == 0, the standard error is calculated as the weighted standard deviation divided by the square root of the sum of the weights minus 1 (see the corresponding source for statsmodels version 0.9 on GitHub):

    standard_error = standard_deviation / sqrt(sum(weights) - 1)

    A numerical check of this relation is sketched after this list.
    

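To make that check concrete, here is a small sketch (reusing the array and weights from above) verifying that the DescrStatsW numbers match the manual calculation from the first snippet as well as the standard-error formula:

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

array = np.array([1, 2, 1, 2, 1, 2, 1, 3])
weights = np.ones_like(array)
weights[3] = 100
weighted_stats = DescrStatsW(array, weights=weights, ddof=0)

# Manual ddof == 0 calculation, as in the first snippet:
mean = np.average(array, weights=weights)
var = np.average((array - mean)**2, weights=weights)

print(np.isclose(weighted_stats.mean, mean))           # True
print(np.isclose(weighted_stats.var, var))             # True
print(np.isclose(weighted_stats.std, np.sqrt(var)))    # True

# Standard error = std / sqrt(sum(weights) - 1) for ddof == 0:
print(np.isclose(weighted_stats.std_mean,
                 weighted_stats.std / np.sqrt(weights.sum() - 1)))  # True
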
Here's one more option:

np.sqrt(np.cov(values, aweights=weights))

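One caveat: by default np.cov applies a degrees-of-freedom correction (the analogue of ddof=1), so it will not reproduce the ddof == 0 numbers above unless you pass bias=True. A small sketch with the same array and weights as before:

import numpy as np

array = np.array([1, 2, 1, 2, 1, 2, 1, 3])
weights = np.ones_like(array)
weights[3] = 100

# bias=True normalizes by the sum of the weights (ddof == 0), matching the
# manual calculation and DescrStatsW(..., ddof=0) above:
print(np.sqrt(np.cov(array, aweights=weights, bias=True)))   # ~0.2143

# The default applies an extra weight-based correction and gives a larger value here:
print(np.sqrt(np.cov(array, aweights=weights)))
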
There doesn't appear to be such a function in numpy/scipy yet, but there is a ticket proposing this functionality. Included there, you will find Statistics.py, which implements weighted standard deviations.