Partitioning Teams Part 2: Electric Boogaloo

Posted in math on Thursday, April 28 2016

In a previous post I looked at various ways of partitioning a group of people into teams, such that each individual's preferences for teammates are taken into consideration and the overall happiness of the team, and thus the corporation, is maximized. I've been spinning ever more complicated ways of doing this in my head, so why not try one out?

As a refresher I have the following:

  1. A Person is an object that has a set of 5 friends, which are other Person objects.
  2. A company is the larger pool of persons.
  3. A team is a subset of the company, with the teams equally dividing the company.
  4. A person's happiness is the size of the intersection between the set of people they consider friends and the set of people on their team.
  5. The happiness of a team is simply the sum of each individual's happiness, and ditto for the company overall.

import random

class Person(object):
    def __init__(self):
        self.friends = set()

def makecomp(size):
    # build a company of `size` people, each with 5 random friends
    company = { Person() for i in range(size) }
    for person in company:
        # sample from a list (random.sample no longer accepts sets as of
        # Python 3.11), and exclude the person so no one is their own friend
        person.friends = set(random.sample(list(company - {person}), 5))
    return company

def teamhappiness(team):
    # each member contributes the number of teammates who are also friends
    return sum(map(lambda person: len(team & person.friends), team))

def overallhappiness(*teams):
    return sum(map(teamhappiness, teams))
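
As a quick sanity check (my own usage sketch, not from the original notebook): if the whole company is treated as one big team, every person's 5 friends are on it, so the total happiness should be 5 per person.

company = makecomp(90)
print(teamhappiness(company))  # 450, i.e. 5 friends per person times 90 people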

Switching Two People

The best idea I tried before was randomly swapping team members: if the swap improves overall happiness, keep it; if it doesn't, keep the original team arrangement.

  1. Pick 2 teams at random.
  2. Randomly pick 1 member from each team.
  3. Swap them.
  4. If the overall happiness improves, keep the swap; otherwise undo it.
  5. Repeat.

I call this paired switching.

def pairedswitch(teams, trials, staglimit):
    stagnation = 0
    for i in range(trials):
        # pick two teams at random
        random.shuffle(teams)
        team1, team2 = teams[0], teams[1]
        # sample needs a sequence, not a set
        pick1 = set(random.sample(list(team1), 1))
        pick2 = set(random.sample(list(team2), 1))
        newteam1 = (team1 - pick1) | pick2
        newteam2 = (team2 - pick2) | pick1

        newteams = [newteam1, newteam2] + teams[2:]

        # keep the swap only if it improves overall happiness
        if overallhappiness(*newteams) > overallhappiness(*teams):
            stagnation = 0
            teams = newteams
        else:
            stagnation += 1

        # give up after too many consecutive failed swaps
        if stagnation > staglimit:
            break
    return teams, i

Wrapping this up in a nice generator, I can go forth and do a statistically significant number of simulations:

def pairedsimul(compsize, numteams, numtrials, staglimit=200):
    for i in range(numtrials):
        company = makecomp(compsize)
        # deal the company out into equal-sized random teams
        teams = []
        teamsize = compsize // numteams
        for _ in range(numteams):
            team = set(random.sample(list(company), teamsize))
            teams.append(team)
            company = company - team

        teams, steps = pairedswitch(teams, 5000, staglimit)

        yield (overallhappiness(*teams), steps)

Simulated Annealing

The last method only switches two people at a time. Maybe this forces it to get locked into local maxima? Maybe we can switch larger subsets, say up to 10 people at a time?

To do this with simulated annealing I'm going to use the following acceptance criterion: accept the swap if it improves overall happiness, or otherwise with a probability given by the Boltzmann distribution at a given temperature that cools off over time (hence annealing). When the temperature is hot the probability of accepting otherwise-bad swaps is high, and as it cools that probability drops off.
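
Written out, this is the usual Metropolis criterion: a swap that changes the overall happiness from $H_{old}$ to $H_{new}$ at temperature $T$ is accepted with probability

$$ P(\text{accept}) = \begin{cases} 1 & \text{if } H_{new} > H_{old} \\ e^{- \left( H_{old} - H_{new} \right) / T} & \text{otherwise} \end{cases} $$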

In terms of code, I use a generator to do the cooling off, and create my own callable that remembers what the old score was to be the gatekeeper. This simplifies the actual annealing process greatly.

import math

def cooling(initial, factor):
    # geometric cooling schedule: the temperature shrinks by `factor` each draw
    temp = initial
    while True:
        yield temp
        temp = factor * temp


class Acceptor(object):
    def __init__(self, firstscore, factor):
        self.oldscore = firstscore
        self.temperator = cooling(2 * firstscore, factor)

    def __call__(self, score):
        if score > self.oldscore:
            self.oldscore = score
            return True
        temperature = next(self.temperator)
        if temperature > 1e-40:
            criterion = math.exp(-(self.oldscore - score) / temperature)
            if random.random() < criterion:
                # only remember the score if the worse swap is actually accepted
                self.oldscore = score
                return True
        return False
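
A quick illustration of the gatekeeper's behavior (a sketch with made-up numbers):

acceptor = Acceptor(firstscore=100, factor=0.9)
print(acceptor(110))  # an improvement is always accepted: True
print(acceptor(95))   # a worse score is accepted with probability exp(-15/temperature)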

With these in hand I can make an anneal method, which is a simple extension of the previous pairwise switching method.

def anneal(teams, trials, maxswap, factor, staglimit):
    stagnation = 0
    acceptor = Acceptor(overallhappiness(*teams), factor)

    for i in range(trials):
        # swap a random-sized group, from 1 up to maxswap people per team
        swaps = random.randint(1, maxswap)
        random.shuffle(teams)
        team1, team2 = teams[0], teams[1]
        pick1 = set(random.sample(list(team1), swaps))
        pick2 = set(random.sample(list(team2), swaps))
        newteam1 = (team1 - pick1) | pick2
        newteam2 = (team2 - pick2) | pick1

        newteams = [newteam1, newteam2] + teams[2:]
        score = overallhappiness(*newteams)

        # the acceptor takes improvements and, sometimes, worse arrangements
        if acceptor(score):
            stagnation = 0
            teams = newteams
        else:
            stagnation += 1

        if stagnation > staglimit:
            break
    return teams, i

def annealsimul(compsize, numteams, numtrials, factor=0.5, maxiter=5000, maxswap=1, staglimit=200):
    for i in range(numtrials):
        company = makecomp(compsize)
        # same random initial team assignment as before
        teams = []
        teamsize = compsize // numteams
        for _ in range(numteams):
            team = set(random.sample(list(company), teamsize))
            teams.append(team)
            company = company - team

        teams, steps = anneal(teams, maxiter, maxswap, factor, staglimit)

        yield (overallhappiness(*teams), steps)

Now there are two important variables that can be manipulated: factor, which controls how quickly the temperature cools off (and thus how frequently apparently-bad swaps are accepted), and maxswap, which is the maximum size of the subpopulation that is swapped between two teams.

To explore this I'm going to just try a range of values and see what impact they have.

import numpy as np

fscore = []
for factor in np.linspace(0.01, 1.0, endpoint=False, num=20):
    results = np.array(list(annealsimul(90, 3, 50, factor=factor, maxswap=10)))
    # columns: factor, mean/std of final happiness, mean/std of steps taken
    row = [factor, results[:,0].mean(), results[:,0].std(), results[:,1].mean(), results[:,1].std()]
    fscore.append(row)

(Figure: mean final happiness and number of iterations as a function of the cooling factor.)

It looks like the factor has little impact on the final result; in fact, up to a factor of ~0.5 it doesn't seem to have an impact on anything. After that point the number of iterations jumps way up, but there is no concomitant gain in happiness score. I'm guessing that past this point the system just spends too much time accepting terrible swaps on the off chance they lead somewhere, and it is simply a waste of time.

To continue I'm going to just pick a middling value of 0.5, barrel forward, and explore the impact of maxswap.

nscore = []
for swap in range(1, 15):
    results = np.array(list(annealsimul(90, 3, 50, factor=0.5, maxswap=swap)))
    row = [swap, results[:,0].mean(), results[:,0].std(), results[:,1].mean(), results[:,1].std()]
    nscore.append(row)

(Figure: mean final happiness and number of iterations as a function of the maximum swap size.)

This is interesting.

I was trying large swap sizes since I thought that maybe, just maybe, I was missing out on really good configurations that I could only reach with large swaps. However, large swaps also increase the chances of making a swap that is really poor overall, and it looks like this effect dominates: the larger the swap size, the worse the performance.

Comparing the two methods

Time to compare the two methods with the hand-waving maybe-sorta best values I found above, that is, with a cool-off factor of 0.5 and only swapping one person from each team at a time (i.e. 2 people).
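
The comparison code isn't shown in this post, but it boils down to running both generators with the same settings and comparing the resulting distributions (a sketch; the variable names are mine and the plotting is omitted):

paired = np.array(list(pairedsimul(90, 3, 50)))
annealed = np.array(list(annealsimul(90, 3, 50, factor=0.5, maxswap=1)))

# column 0 holds the final happiness scores, column 1 the steps taken
print(paired[:,0].mean(), annealed[:,0].mean())
print(paired[:,1].mean(), annealed[:,1].mean())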

(Figure: distributions of final happiness scores for paired switching and annealing.)

Well, if there is a statistically significant difference between the final scores of the two methods it isn't jumping out at me.

But maybe my annealing method is faster?

(Figure: distributions of the number of steps taken by paired switching and annealing.)

Nope. If anything annealing looks worse overall (though I'm just eyeballing so it could be just a statistical fluke).

This is even worse than it looks on the surface, because the step count only measures the number of swaps attempted. The annealing method has a more complicated acceptance criterion, so it is actually more computationally intensive per step, and it also appears to take many more steps to get there.

My guess is that, at least with random networks, nothing is gained from taking a chance on unlikely steps leading in new directions and avoiding local maxima. Possibly there is only a global maximum, and my premise right from the beginning -- that the pairwise swap can get trapped in some local maximum and miss a greater overall maximum -- is just outright wrong.

Brute Force?

After all this dilly-dallying, why couldn't I just brute force it and try every possible way of splitting? How hard could that be?

Well, let's consider how many ways there are to split $N$ people evenly into $n$ teams of $k$ people each:

For the first team it is simply: $$ { N \choose k } $$

The second team is chosen from the remainder so: $$ { N-k \choose k }$$

And so on until the final team which is $$ { k \choose k }$$

Which means the final result is:

$$ {N \choose k} {N-k \choose k} \dots {k \choose k} \frac{1}{n!}$$

Dividing by $n!$ because the order of the teams is not important.
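
As a tiny sanity check (my own example): splitting $N = 4$ people into $n = 2$ teams of $k = 2$ gives $\frac{1}{2!} {4 \choose 2} {2 \choose 2} = 6/2 = 3$ distinct splits, which matches direct enumeration: $\{AB|CD\}$, $\{AC|BD\}$, and $\{AD|BC\}$.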

So we can rewrite the above equation, taking into account that $N = nk$:

$$ \frac{1}{n!} \prod_{i=1}^{n} { ik \choose k} = { 1 \over n!} \prod_{i=1}^{n} { \left( ik \right)! \over k! \left( ik -k \right)!} = { 1 \over n! \left(k!\right)^{n} } \prod_{i=1}^{n} { \left( ik \right)! \over \left( \left( i-1 \right) k \right)!} $$

But that's nice! Each term in the product cancels the previous term, consider:

$$ { \left( 1k \right)! \over \left( 0k \right)!} { \left( 2k \right)! \over \left( 1k \right)!}{ \left( 3k \right)! \over \left( 2k \right)!} \dots { \left( (n-1)k \right)! \over \left( \left( n-2 \right) k \right)!}{ \left( nk \right)! \over \left( \left( n-1 \right) k \right)!}$$

So we are left with:

$$ \frac{1}{n!} \prod_{i=1}^{n} { ik \choose k} = { (nk)! \over n! \left(k!\right)^{n} } $$

Which is a nice result. Using that I can calculate the number of team configurations I would need to brute force: for my 90-person company split into 3 teams of 30, there are $1.33 \times 10^{40}$ possible configurations. That is a lot. Suppose it took one microsecond to evaluate each possible arrangement (ignoring how I would even generate all of them); then it would take $4.21 \times 10^{26}$ years to evaluate all possible teams, which is about $3 \times 10^{16}$ times the current age of the universe.
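
For concreteness, the count is easy to compute with exact integer arithmetic (my own check, not from the original notebook):

from math import factorial

def numsplits(n, k):
    # number of ways to split n*k people into n unlabeled teams of k
    return factorial(n * k) // (factorial(n) * factorial(k) ** n)

print(numsplits(3, 30))  # ~1.33e40 ways to split 90 people into 3 teams of 30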

As a random aside, if $k$ is particularly large we can use Stirling's Approximation to come up with a nice result:

Let the number of possible team configurations be $T$:

$$ T = { (nk)! \over n! \left(k!\right)^{n} } $$

$$ \ln{T} = \ln{(nk)!} - \ln{n!} - n \ln{k!} $$

$$ \ln{T} \approx nk \ln{nk} - nk - \ln{n!} - nk \ln{k} + nk $$

$$ \ln{T} \approx nk \ln{nk} - \ln{n!} - nk \ln{k} $$

$$ \ln{T} \approx nk \ln{n} - \ln{n!} $$

$$ T \approx { n^{nk} \over n! } $$
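
As a quick numeric check (my own addition): at $k = 30$ the approximation is still rough, since the lower-order Stirling terms we dropped matter, but it lands within a couple of orders of magnitude of the exact count.

n, k = 3, 30
exact = numsplits(n, k)                 # exact count, using numsplits from above
approx = n ** (n * k) / factorial(n)    # the Stirling-based approximation
print(f"{exact:.2e} exact vs {approx:.2e} approximate")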

As usual, the IPython notebook is on github.