The toy production line, revisited

Posted in math on Sunday, May 24 2015

I built a toy model of a production line earlier, and now I'm back to bludgeon it with more statistics. To recap: in The Goal there is a toy model of a factory in which boy scouts pass matches from one bowl to the next, trying to move them to the end of the line, with the number each scout can move from their bowl determined by the roll of a die.

I created a Producer class and a ProductionLine class to perform these tasks in a simulation. The system receives 3.5 units/time step at the head of the production line, and inventory just falls off the end.

I didn't really like how that turned out, for a few reasons: my simulation ignored the first and last producers in the list, I accidentally had the random number generator picking numbers between 1 and 5 instead of 1 and 6, and I only tracked the cumulative performance, which really just showed how far the machines in the production line were falling behind. Since that gap grows ever larger with time, it just blows up and isn't very instructive.

What I would like to look at is how the time-average performance behaves. Each producer on its own should have a time-average performance of 3.5 units/time step, the expected value of a fair die roll: (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5.

I modified the Producer class to track the time-average performance instead of the cumulative performance. I also fixed it so that the first and last producers are accessible and the random number generator has the right bounds.

%%px --local

import numpy as np

class Producer(object):
    def __init__(self, name):
        self.name = name
        self.inventory = 0
        self.cumul = 0
        self.steps = 0
        self.performance = 0.0
        self.next = None

    def __repr__(self):
        return "Id {:d}, Inventory: {:d}, Performance: {:0.1f} ".format(self.name, self.inventory, self.performance)

    def receive(self, amount):
        self.inventory += amount

    def move(self):
        # move pieces to the next unit in line
        can_make = np.random.randint(low=1, high=7)

        if self.name > 0:
            # subsequent units can only make what they have inventory for
            if can_make > self.inventory:
                does_make = self.inventory
            else:
                does_make = can_make
            self.inventory -= does_make
        else:
            # the first unit can make as many as it wants
            does_make = can_make

        self.cumul += does_make
        self.steps += 1
        self.performance = self.cumul/self.steps

        if self.next is not None:
            self.next.receive(does_make)

class ProductionLine(object):
    def __init__(self):
        # initialize the production line; there's nothing here yet
        self.head = None
        self.tail = None
        self.cur = None

    def __iter__(self):
        # return the cursor to the head of the line
        self.cur = self.head
        return self

    def __next__(self):
        # walk down the line, advancing one time step at a time
        if self.cur is not None:
            self.cur.move()
            current = self.cur

            # Advance the cursor
            self.cur = self.cur.next

            return current

        else:
            raise StopIteration

    def append(self, producer):
        # Add a new producer to the end of the line
        if self.head is None:
            self.head = producer

        if self.tail is not None:
            self.tail.next = producer

        self.tail = producer

def simul(runtime):
    production = ProductionLine()

    # set up production line of 20 producers
    for i in range(20):
        p = Producer(i)
        production.append(p)

    # run the production line for runtime
    for j in range(runtime):
        for p in production:
            pass
    # collect the time-average performance of each producer; note that iterating
    # the line here advances it one extra step, which is negligible for a long run
    return np.fromiter((p.performance for p in production), dtype=float, count=20)

Before I start running large swaths of simulations, I want to know how long it takes for the time average to settle down to a value. So let's run one production line for 1000 time steps and look at the average production rate over time.
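
Here is a minimal sketch of how a plot like that can be generated (assuming matplotlib is available; this isn't necessarily the exact code in the notebook). It keeps a plain list of the producers so that reading their performance doesn't advance the line an extra step.

import numpy as np
import matplotlib.pyplot as plt

# one production line of 20 producers, keeping direct references to them
production = ProductionLine()
producers = [Producer(i) for i in range(20)]
for p in producers:
    production.append(p)

# record every producer's time-average performance after each time step
history = []
for step in range(1000):
    for _ in production:   # one pass down the line is one time step
        pass
    history.append([p.performance for p in producers])

history = np.array(history)
plt.plot(history)   # one curve per producer
plt.xlabel("time step")
plt.ylabel("time-average production rate (units/time step)")
plt.show()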

[Figure: time-average production rate of each producer over 1000 time steps]

Everything more or less calms down by 600 time steps, but just to be sure I'll run each production line for 1000 time steps.

To get some statistics I will run 1000 production lines for 1000 time steps each and look at the results. I'm going to run this in parallel, less to save time and more because, hey, I can run things in parallel.
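
Roughly, the parallel run looks like this with IPython's parallel machinery (a sketch, not necessarily the exact notebook code): it assumes a cluster of IPython engines is already running, e.g. started with ipcluster, and relies on the %%px --local cell above having defined the classes, simul, and numpy on every engine.

from IPython.parallel import Client   # "from ipyparallel import Client" in newer releases
import numpy as np

rc = Client()                     # connect to the already-running cluster of engines
view = rc.load_balanced_view()

# 1000 independent production lines, 1000 time steps each
results = view.map_sync(simul, [1000] * 1000)

# stack into a (1000, 20) array: one row per production line, one column per producer
results = np.vstack(results)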

I can plot some box plots to show how the average production rate of a piece of equipment is affected by its position in the line. It's awkward to phrase: each box shows the average and spread of 1000 time-averaged production rates, each one averaged over 1000 time steps.
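
Something like this produces the box plot, assuming results is the (1000, 20) array from the parallel run above:

import matplotlib.pyplot as plt

# one box per position in the line: the spread of 1000 time-averaged rates
plt.boxplot(results)
plt.xlabel("position in the production line")
plt.ylabel("time-average production rate (units/time step)")
plt.show()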

[Figure: box plots of time-average production rate by position in the line]

[Figure: time-average production rate of the first and last machines in the line]

I think this makes the point more clearly: the first machine in the line can operate freely, making as many units as it can, so its time-average performance is centered around 3.5. The last machine in the line is constrained by all the machines before it, and its time-average performance is centered around 3.1.

This last machine has become the rate-limiting step in the whole production line: it can only produce 3.1 units/time step even though each machine in the line can, individually, produce 3.5 units/time step on average. That is roughly an 11% drop in productivity ((3.5 − 3.1)/3.5 ≈ 0.11), purely because of variability in the instantaneous production rates.

As usual, the IPython notebook is on GitHub.