
# Coding katas showcasing a few resolution methods for the N-Queens problem.

Kata 1: Backtracking, an enumeration-oriented approach. This is fine for small problem
sets; otherwise it becomes far too expensive in computation time.
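For illustration, a minimal sketch of this enumeration (not the repository's actual code): place one queen per row, and undo a placement as soon as it conflicts with an earlier one.

```python
def solve_backtracking(n):
    """Return one N-Queens solution as a list where index = row, value = column."""
    cols = []  # cols[r] is the column of the queen already placed on row r

    def safe(row, col):
        # A placement conflicts if it shares a column or a diagonal
        # with any queen placed on an earlier row.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:
            return True  # every row holds a queen: solution found
        for col in range(n):
            if safe(row, col):
                cols.append(col)
                if place(row + 1):
                    return True
                cols.pop()  # backtrack: undo the placement, try the next column
        return False

    return cols if place(0) else None


print(solve_backtracking(8))  # -> [0, 4, 7, 5, 2, 6, 1, 3]
```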

Kata 2: Backtracking equipped with min/max heuristics that set priorities in the exploration,
which generally leads us to a solution much faster (smart backtracking).
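The exact heuristic is not detailed here, but as one example, the candidate columns can be ordered so that the least-constraining placements are explored first; `order_columns` below is a hypothetical helper written for this illustration.

```python
def order_columns(row, n):
    """Hypothetical ordering helper (the kata's actual min/max heuristic
    may differ): try first the columns whose queen would attack the fewest
    squares on the rows still to be filled."""
    def future_squares_attacked(col):
        attacked = 0
        for r in range(row + 1, n):
            d = r - row
            # Squares on row r hit by a queen at (row, col): the column
            # below plus the two diagonals, when they fall on the board.
            attacked += sum(1 for c in (col, col - d, col + d) if 0 <= c < n)
        return attacked
    return sorted(range(n), key=future_squares_attacked)
```

Plugged into the Kata 1 sketch, the exploration loop becomes `for col in order_columns(row, n):` instead of `for col in range(n):`.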

Kata 3: Random state generation. First without any quality grade, then with a quality grade
to guide the random state generation (gradient algo). Then we will use another algo: we
basically relax our quality-grade function so that it accepts 'worse' generated states at first,
and we decrease that tolerance in a geometric fashion (simulated annealing).
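A minimal sketch of the annealing idea, assuming a quality grade that counts attacking pairs (the plain gradient variant would keep only the `delta <= 0` branch of the acceptance test):

```python
import math
import random

def conflicts(cols):
    """Quality grade: the number of attacking queen pairs (0 means solved)."""
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def simulated_annealing(n, start_temp=10.0, cooling=0.995, seed=None):
    """Relaxed acceptance: worse states get in with probability e^(-delta/T),
    and T decays geometrically, so the tolerance shrinks over time."""
    rng = random.Random(seed)
    state = [rng.randrange(n) for _ in range(n)]  # one queen per row
    temp = start_temp
    while conflicts(state) > 0:
        if temp < 0.01:  # cooled down without a solution: random restart
            state = [rng.randrange(n) for _ in range(n)]
            temp = start_temp
        row, col = rng.randrange(n), rng.randrange(n)
        candidate = state[:]
        candidate[row] = col
        delta = conflicts(candidate) - conflicts(state)
        # Always accept improvements; accept worse states with prob e^(-delta/T).
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            state = candidate
        temp *= cooling  # geometric cooling schedule
    return state

print(simulated_annealing(8, seed=1))
```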

Kata 4: We will use a genetic algo: we start with a population of states, and at each genetic
evolution we generate a new population using genetic operations such as cross-over and
mutation. Each time we check whether there is a solution within our population; otherwise we
continue. When generating the new population we can enhance the genetic evolution by selecting
the individuals that host the best-fitting characteristics for our domain. Thus we can keep the
population size constant while it gets better and better for the underlying environment (our
problem).

While generating the next population, another optimisation to consider is to select not only the
best-fitting individuals: take the first 10% (first = best fitting) plus the last 10%
(last = worst fitting). Taking both the first and the last 10% helps our algo avoid getting
stuck on a local optimum of the characteristics curve, so to speak; it shakes up the generated
population (see the sketch below).
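A rough sketch of that loop, with one-point cross-over, a single-gene mutation, and the top-10% + bottom-10% selection just described; `pop_size`, `mutation_rate`, and the helper names are illustrative choices, not the kata's actual parameters (the `conflicts` helper is the same as in the simulated-annealing sketch):

```python
import random

def conflicts(cols):
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def fitness(cols):
    """Higher is better: non-attacking pairs out of all possible pairs."""
    n = len(cols)
    return n * (n - 1) // 2 - conflicts(cols)

def genetic_search(n, pop_size=100, mutation_rate=0.3, seed=None):
    rng = random.Random(seed)
    perfect = n * (n - 1) // 2
    pop = [[rng.randrange(n) for _ in range(n)] for _ in range(pop_size)]
    while True:
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == perfect:
            return pop[0]                     # a solution is in the population
        k = max(1, pop_size // 10)
        parents = pop[:k] + pop[-k:]          # first 10% + last 10% (shake-up)
        children = []
        while len(parents) + len(children) < pop_size:
            mom, dad = rng.sample(parents, 2)
            cut = rng.randrange(1, n)         # one-point cross-over
            child = mom[:cut] + dad[cut:]
            if rng.random() < mutation_rate:  # mutation
                child[rng.randrange(n)] = rng.randrange(n)
            children.append(child)
        pop = parents + children              # population size stays constant

print(genetic_search(8, seed=3))
```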


Kata 5: Particle swarm optimization (PSO).

The idea is to simulate the behavior of a collective intelligence. We will have a population of
individuals; each one has an initial position and an initial velocity vector which determines
its next move. Additionally, the individuals stay in contact/sync with their closest relatives.
This makes the group intelligence much more efficient, since we get a network of autonomous
individuals which can learn from each other.

So, just like a flock of birds: if one of them sees a food source and flies towards it, everyone
around it follows, which, in a transitive manner, makes everyone follow.

Also, the velocity of each bird is proportional to how far it is from the food source. The
closer it is, the less velocity the bird needs.

NOTE: The analogy above implies that we know where the solution is. This is usually the case
when we visualize the algorithm, since we are the ones setting the solution (with our cursor,
for example). But in practice we do not know the solution, so we cannot judge how fast we can
go, which in turn means that the remark above is not part of our algorithm.

```
So:
- we start with a population of individuals,
- for each individual, we set an initial random velocity vector,
- for each individual, we initialize two vectors:
    - best self position,
    - best neighbor position.

- while the solution is not found, for each individual:

    - iterate through the neighbors to find the best neighbor position:
        - if there is no single max/min (not a totally ordered set), only
          maximals or minimals, we could compute the barycentre of those
          maximals/minimals to get the best overall position.

      Correction: we should NOT use the barycentre, since doing that would
      send us to the center of the herd, which is not really what we want!

      Also, we should not go/teleport to the best found position, because
      the owner of that position might be stuck in a local optimum. Maybe
      by combining its own best position with that one, the current
      individual can reach a position better than both. (This scales to N
      positions, thus N neighbors as well.)

      What we should do is move towards it and let the target keep moving,
      which can also carry us towards that target's next best position.

    So,
    - if we have a single individual with the max/min fitness:
        - compute the difference between its best position and our best
          position,
        - move by a portion of the resulting vector (do not teleport, just
          get closer).

    - if we have multiple individuals/neighbors with the max/min fitness
      (maximals and minimals - not a totally ordered set):
        - choose one of them randomly:
            - compute the difference between its best position and our best
              position,
            - move by a portion of the resulting vector (do not teleport,
              just get closer).

    - check whether the current individual is on the solution, or close
      enough (with enough precision):
        - if so, solution_found = true
```

---

General approach (mantra):
* In backtracking, the random solver, and the gradient algo, we explore solutions without
optimizing the current solution.
* For the other resolution methods we start from a solution and optimize it (PSO, genetic algo,
simulated annealing).

---

Note:
* The kata on particle swarm optimisation is still to be done; here
is a link as a reference: https://github.com/0la0/psoViz

