Question

How do you randomly select a table row in T-SQL based on an applied weight for all candidate rows?

For example, I have a set of rows in a table weighted at 50, 25, and 25 (the weights happen to add up to 100 here but do not need to), and I want to select one of them at random with a statistical outcome proportional to the respective weights.


Solution

Dane's answer includes a self join in a way that introduces a square law: the join produces roughly n*n/2 rows when the table has n rows.

It would be more ideal to be able to scan the table just once.

DECLARE @id int, @weight_sum int, @weight_point int
DECLARE @table TABLE (id int, weight int)

INSERT INTO @table(id, weight) VALUES(1, 50)
INSERT INTO @table(id, weight) VALUES(2, 25)
INSERT INTO @table(id, weight) VALUES(3, 25)

SELECT @weight_sum = SUM(weight)
FROM @table

-- Pick a random target point in the range [1, @weight_sum]
SELECT @weight_point = ROUND(((@weight_sum - 1) * RAND() + 1), 0)

-- Walk the rows, decrementing @weight_point by each row's weight;
-- once it reaches zero or below, stop updating @id
SELECT
    @id = CASE WHEN @weight_point <= 0 THEN @id ELSE [table].id END,
    @weight_point = @weight_point - [table].weight
FROM
    @table [table]
ORDER BY
    [table].weight DESC

This will go through the table, setting @id to each record's id value while at the same time decrementing @weight_point. Eventually @weight_point reaches zero or goes negative, which means that the sum of all preceding weights is greater than or equal to the randomly chosen target value. That is the record we want, so from that point onwards we set @id to itself (ignoring the ids of any remaining rows).

This runs through the table just once, but it does have to run through the entire table even if the chosen value is the first record. Because the average stopping position is halfway through the table (and earlier still when the rows are ordered by descending weight), writing a loop could possibly be faster, especially if the weights fall into common groups:

DECLARE @id int, @weight_sum int, @weight_point int, @next_weight int, @row_count int
DECLARE @table TABLE (id int, weight int)

INSERT INTO @table(id, weight) VALUES(1, 50)
INSERT INTO @table(id, weight) VALUES(2, 25)
INSERT INTO @table(id, weight) VALUES(3, 25)

SELECT @weight_sum = SUM(weight)
FROM @table

SELECT @weight_point = ROUND(((@weight_sum - 1) * RAND() + 1), 0)

-- Work out which weight group contains the target point,
-- starting from the heaviest group
SELECT @next_weight = MAX(weight) FROM @table
SELECT @row_count   = COUNT(*)    FROM @table WHERE weight = @next_weight
SET @weight_point = @weight_point - (@next_weight * @row_count)

WHILE (@weight_point > 0)
BEGIN
    SELECT @next_weight = MAX(weight) FROM @table WHERE weight < @next_weight
    SELECT @row_count   = COUNT(*)    FROM @table WHERE weight = @next_weight
    SET @weight_point = @weight_point - (@next_weight * @row_count)
END

-- Once @weight_point is zero or less, we know that the randomly chosen record
-- is in the group of records WHERE [table].weight = @next_weight

-- Pick a random ordinal within the chosen weight group
SELECT @row_count = ROUND(((@row_count - 1) * RAND() + 1), 0)

SELECT
    @id = CASE WHEN @row_count <= 0 THEN @id ELSE [table].id END,
    @row_count = @row_count - 1
FROM
    @table [table]
WHERE
    [table].weight = @next_weight
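
Either way, a quick sanity check on the statistical outcome the question asks for is to run the selection many times and tally how often each id comes up. A minimal sketch using the first approach (the 10,000-trial count is arbitrary):

DECLARE @id int, @weight_sum int, @weight_point int, @trial int
DECLARE @table TABLE (id int, weight int)
DECLARE @picks TABLE (id int)

INSERT INTO @table(id, weight) VALUES(1, 50)
INSERT INTO @table(id, weight) VALUES(2, 25)
INSERT INTO @table(id, weight) VALUES(3, 25)

SELECT @weight_sum = SUM(weight) FROM @table

SET @trial = 0
WHILE (@trial < 10000)
BEGIN
    SELECT @weight_point = ROUND(((@weight_sum - 1) * RAND() + 1), 0)

    SELECT
        @id = CASE WHEN @weight_point <= 0 THEN @id ELSE [table].id END,
        @weight_point = @weight_point - [table].weight
    FROM @table [table]
    ORDER BY [table].weight DESC

    INSERT INTO @picks(id) VALUES(@id)
    SET @trial = @trial + 1
END

-- Each id should be picked in rough proportion to its weight (about 50/25/25 here)
SELECT id, COUNT(*) AS picks
FROM @picks
GROUP BY id
ORDER BY id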

OTHER TIPS

You simply need to sum the weights of all candidate rows, choose a random point within that sum, and then select the record that corresponds to that point (each record is paired with the accumulating sum of the weights up to and including it).

DECLARE @id int, @weight_sum int, @weight_point int
DECLARE @table TABLE (id int, weight int)

INSERT INTO @table(id, weight) VALUES(1, 50)
INSERT INTO @table(id, weight) VALUES(2, 25)
INSERT INTO @table(id, weight) VALUES(3, 25)

SELECT @weight_sum = SUM(weight)
FROM @table

SELECT @weight_point = ROUND(((@weight_sum - 1) * RAND() + 1), 0)

-- Triangular self join: for each row t1, SUM(t2.weight) is the running
-- total of weights up to and including t1; keep the first row whose
-- running total reaches the target point
SELECT TOP 1 @id = t1.id
FROM @table t1, @table t2
WHERE t1.id >= t2.id
GROUP BY t1.id
HAVING SUM(t2.weight) >= @weight_point
ORDER BY t1.id

SELECT @id

The "incrementally carrying a an accumlating[sic] weight sum" part is expensive if you have a lot of records. If you also already have a wide range of scores/weights (ie: the range is wide enough that most records weights are unique. 1-5 stars probably wouldn't cut it), you can do something like this to pick a weight value. I'm using VB.Net here to demonstrate, but this could easily be done in pure Sql as well:

Function PickScore() As Double
    'Assume we have a database wrapper class instance called SQL
    'and an already-seeded Random instance called Rnd

    'Get the count of scores in the database
    Dim ScoreCount As Double = SQL.ExecuteScalar("SELECT COUNT(score) FROM [MyTable]")
    'You could also approximate this with just the number of records in the table, which might be faster.

    'Random number between 0 and 1 with ScoreCount possible values
    Dim rand As Double = Rnd.Next(CInt(ScoreCount)) / ScoreCount

    'Use the equation y = 1 - x^3 to skew results in favor of higher scores
    'For x between 0 and 1, y is also between 0 and 1 with a strong bias towards 1
    rand = 1 - (rand * rand * rand)

    'Now we need to map the (0, 1] value to [1, MaxScore]:
    'just find MaxScore and multiply by rand
    Dim MaxScore As UInteger = SQL.ExecuteScalar("SELECT MAX(score) FROM [MyTable]")
    Return MaxScore * rand
End Function

Run this, and pick the record with the largest score less than the returned weight. If more than one record shares that score, pick one of them at random. The advantages here are that you don't have to maintain any sums, and you can tweak the probability equation to suit your tastes. But again, it works best with a larger distribution of scores.
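
For illustration, a minimal sketch of the same skewed pick in pure SQL, assuming the same illustrative [MyTable] with an int score column, and simplifying the discretized random number to a plain RAND(); the NEWID() tie-break stands in for the "pick one of them at random" step:

DECLARE @rand float, @target float

-- Same skew as the VB.Net version: y = 1 - x^3 biases towards high scores
SET @rand = 1 - POWER(RAND(), 3)
SELECT @target = @rand * MAX(score) FROM [MyTable]

-- Pick the record with the largest score not exceeding the target;
-- NEWID() breaks ties between records sharing that score at random
SELECT TOP 1 *
FROM [MyTable]
WHERE score <= @target
ORDER BY score DESC, NEWID()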

The way to do this with random number generators is to integrate the probability density function. With a set of discrete values you can calculate the prefix sum (the sum of all values up to and including this one) and store it. You then select the record with the minimum prefix sum greater than the random number.

On a database, the prefix sums of all subsequent rows have to be updated after an insertion. If the relative frequency of updates and the size of the data set don't make the cost of doing this prohibitive, the appropriate value can be obtained from a single sargable query (one whose predicate can be resolved by an index lookup).
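
A minimal sketch of that lookup, assuming a hypothetical WeightedRows table with a precomputed, indexed prefix_sum column:

DECLARE @weight_sum int, @weight_point int

SELECT @weight_sum = MAX(prefix_sum) FROM WeightedRows

-- Random target point in [1, @weight_sum]
SELECT @weight_point = ROUND(((@weight_sum - 1) * RAND() + 1), 0)

-- Sargable: an index on prefix_sum resolves this with a single seek
SELECT TOP 1 id
FROM WeightedRows
WHERE prefix_sum >= @weight_point
ORDER BY prefix_sum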

If you need to get a group of samples (say, you want to sample 50 rows from a collection of 5 million rows) where each row has an int column called Weight, and where larger values mean more weight, you can use this query:

SELECT *
FROM
(
    -- Weighted sampling: each row is given the sort key RAND()^(1/Weight),
    -- seeded per row via NEWID(); the 50 largest keys form the sample
    SELECT TOP 50 RowData, Weight
    FROM MyTable
    ORDER BY POWER(RAND(CAST(NEWID() AS VARBINARY)), (1.0/Weight)) DESC
) X
ORDER BY Weight DESC

The key here is using the POWER() function, as illustrated here.

References on the choice of a random function are here and here.

Alternatively you can use:

1.0 * ABS(CAST(CHECKSUM(NEWID()) AS bigint)) / CAST(0x7FFFFFFF AS INT) 

You cast checksum as BIGINT instead of INT because of this issue:

Because checksum returns an int, and the range of an int is -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647), the abs() function can return an overflow error if the result happens to be exactly -2,147,483,648! The chances are obviously very low (about 1 in 4 billion), but we were running it over a ~1.8b row table every day, so it was happening about once a week! The fix is to cast the checksum to bigint before the abs.
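
Substituting that expression into the earlier sampling query might look like this (same illustrative MyTable):

SELECT TOP 50 RowData, Weight
FROM MyTable
ORDER BY POWER(1.0 * ABS(CAST(CHECKSUM(NEWID()) AS bigint)) / CAST(0x7FFFFFFF AS INT), 1.0 / Weight) DESC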

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow