dcfl uses the Communication-Free Learning algorithm presented by K. R. Duffy et al. in *Decentralized Constraint Satisfaction*. The primary benefit of the algorithm over previous approaches is its distributed nature, which makes it a prime candidate for parallelization.
K. R. Duffy et al. provide a simple example of a constraint satisfaction problem that we can solve with dcfl. Suppose five routers within an area each have to be on a different channel, and that we have been given five distinct channels to assign. This is essentially a graph coloring problem on a complete graph: the routers are the nodes, every pair of routers is adjacent, and the channels are the colors. To solve it using dcfl, we can use the following code:
```haskell
import Data.DCFL
import Data.List

colorVariables n = map (\x -> Variable [0..n-1] 0 $ initDistribution n) [0..n-1]

colorConstraint :: Int -> Int -> [Int] -> Bool
colorConstraint n1 n2 vars = (vars !! n1) /= (vars !! n2)

colorConstraints n = [ConstraintEl [a, b] $ colorConstraint a b |
                        a <- [0..n-1], b <- [0..n-1], a /= b]

main = do
  rsolved <- solveParallel (colorVariables 5) (colorConstraints 5)
  putStrLn $ show $ getValues $ variables rsolved
```
Although the Hackage documentation describes the types this example uses, it is worthwhile to look at portions of this code. To describe the constraint satisfaction problem (CSP), we have a set of variables and a set of constraints on those variables. Each variable can take on one of a list of values and holds a certain value at any given time. The value a variable holds is described by its index in the list of values it can possibly take on. A variable also has an associated distribution, which the algorithm uses internally to solve the CSP. So, when we declare a variable, we can do it as:
```haskell
Variable listOfValues currentValueIndex distribution
```
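For example, a variable that can take on one of the three values \(0\), \(1\), and \(2\), currently holds the value at index \(0\), and starts with a flat distribution would be declared as:

```haskell
-- one variable: possible values [0, 1, 2], current value at index 0,
-- flat initial distribution over the 3 possible values
v = Variable [0, 1, 2] 0 (initDistribution 3)
```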
We define `colorVariables` as a function that, for a given parameter \(n\), creates a set of \(n\) variables which can each take on values between \(0\) and \(n-1\). The only other concept from dcfl used in its definition is `initDistribution`, which initializes a flat distribution over \(n\) values.
We also define `colorConstraint`, which takes the indices of two variables along with the list of variables, and checks that the two variables do not hold the same value. We then build one such constraint for every pair of variables under consideration and collect them in `colorConstraints`. In the context of the problem we are considering, these constraints ensure that no two nodes in our complete graph have the same color, i.e. that no two routers are on the same channel.
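If the graph were not complete, we would only want a constraint per actual edge. A minimal sketch of that variant, reusing `colorConstraint` from above (the `edges` list here is hypothetical, standing in for whatever adjacency data the problem supplies):

```haskell
-- hypothetical edge list: a 5-cycle instead of the complete graph
edges :: [(Int, Int)]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

-- one inequality constraint per edge, in the same shape as colorConstraints
edgeConstraints = [ConstraintEl [a, b] $ colorConstraint a b | (a, b) <- edges]
```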
Finally, we pass the variables and constraints to `solveParallel` to solve the constraint satisfaction problem. Note that in order for the parallelization to actually occur, we have to use the `-threaded` option when compiling with GHC.
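For instance (assuming the example lives in `Main.hs`), compiling with the threaded runtime and running on four cores might look like:

```
ghc -threaded -rtsopts Main.hs
./Main +RTS -N4
```

Here `-rtsopts` allows runtime system flags to be passed at startup, and `+RTS -N4` runs the program with four capabilities.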
Constraints that are decoupled (i.e. largely independent of one another) generally help the efficiency of Communication-Free Learning, since the algorithm can alter the values of some sets of variables independently of others.
Due to the stochastic nature of the algorithm, `solveParallel` comes with no termination guarantee: if the problem has no satisfying assignment, it will continue forever. It is possible to hook into `solveParallel` in order to count the number of iterations that have passed before "giving up."
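dcfl's own hook is the more precise mechanism; as a blunt, library-agnostic alternative, here is a sketch of bounding the whole search from the outside with `System.Timeout` from base, reusing the definitions from the example above (the five-second budget is an arbitrary choice):

```haskell
import System.Timeout (timeout)

main :: IO ()
main = do
  -- give up if no satisfying assignment is found within 5 seconds
  result <- timeout 5000000 $ solveParallel (colorVariables 5) (colorConstraints 5)
  case result of
    Just rsolved -> putStrLn $ show $ getValues $ variables rsolved
    Nothing      -> putStrLn "gave up: no solution found in time"
```

Note that `timeout` cancels the wrapped action with an asynchronous exception, so any worker threads `solveParallel` forks may need their own cleanup.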