Welcome to EDAboard.com


Does genetic algorithm always converge to the efficient solution?

Status
Not open for further replies.

mahaju

Hello
Does the genetic algorithm always converge toward an optimal solution? Are there any papers or articles that have proved this mathematically or empirically?
The idea of crossover preserving genes that are closer to optimal sounds somewhat logical, but couldn't an ill-timed mutation drive the solution away from the optimum entirely?

Thanks in advance.
 

A GA always hopes to find an optimal solution. It is typically applied to problems that are hard to solve, for which no exact method is known. Applying a GA gives you a solution that is optimal compared with the other solutions the algorithm has seen for the same problem.

Mutation may indeed move away from the current solution; this is deliberate, to avoid getting stuck in local minima.
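To make the selection/crossover/mutation loop described above concrete, here is a minimal sketch of a GA maximizing a toy one-dimensional function. All names (`POP_SIZE`, `MUT_RATE`, the blend crossover, etc.) are illustrative choices, not from any particular library or from the thread:

```python
import random

random.seed(0)

# Toy problem: maximize f(x) = -(x - 3)^2 + 9, whose global maximum is at x = 3.
POP_SIZE = 20
GENERATIONS = 100
MUT_RATE = 0.1      # probability that a child is mutated
MUT_STRENGTH = 0.5  # std. deviation of the Gaussian mutation step

def fitness(x):
    return -(x - 3.0) ** 2 + 9.0

def evolve():
    pop = [random.uniform(-10, 10) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # selection: keep the fitter half of the population
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]
        children = []
        while len(children) < POP_SIZE - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0           # crossover: blend two parents
            if random.random() < MUT_RATE:  # mutation: random Gaussian step
                child += random.gauss(0.0, MUT_STRENGTH)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the fittest parents are always carried over, the best fitness in the population never decreases from one generation to the next, while mutation keeps injecting new candidates that may or may not help.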
 

So are there any references where this has been actually proved in any way?
 

A genetic algorithm (GA) can converge to a local maximum. This means a GA's solution is not necessarily the overall global best, although the main idea of a GA is to try to avoid that. It's my understanding that how well a GA behaves (how fast it converges, which maximum it settles on, etc.) depends heavily on the design of the weighting/fitness function as well as on the mutation rate and strength. I don't know of any papers offhand, but I'm sure they exist; a search for GA on IEEE Xplore turns up a lot of hits.

Similarly, as to your last question: since it is a random process, it is always possible for a mutation to derail a solution toward a less optimal result. But that is just my gut feeling on the matter.
 

Yes, I understand a GA can converge to a local maximum instead of the global one, but even to do that it should keep producing results that are better than the previous ones until it reaches the local maximum. What I want to see is a reference where this has been proved, mathematically or otherwise.
 

The way I view it is that the GA finds a solution in a 'well' around a local maximum. In that case the likely outcome is to keep finding solutions with better fitness that are still within the local well, until the local maximum is reached. The only way to get out of the well is if a random mutation lands not only far enough away to be outside the well, but also at a better fitness than the local maximum (or than any of the results found on the way to it).
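The 'well' intuition above can be sketched with a simplified, single-individual hill climber on a hypothetical two-peak landscape (the landscape, the mutation strengths, and all names here are made up for illustration, not a full GA):

```python
import random

# Illustrative two-peak landscape: a local maximum of height 1 at x = -2
# and the global maximum of height 2 at x = 3, with flat ground between.
def fitness(x):
    return max(1.0 - (x + 2.0) ** 2, 2.0 - (x - 3.0) ** 2, 0.0)

def hill_climb(x, mut_strength, steps=2000):
    """Accept a mutated candidate only if it is fitter than the current one."""
    for _ in range(steps):
        cand = x + random.gauss(0.0, mut_strength)
        if fitness(cand) > fitness(x):
            x = cand
    return x

random.seed(1)
trapped = hill_climb(-2.0, mut_strength=0.1)  # small steps stay in the local well
escaped = hill_climb(-2.0, mut_strength=3.0)  # large steps can jump across
```

With small mutations, every candidate outside the local well has worse fitness and is rejected, so the search stays pinned at the local peak; with mutations large enough to land inside the global well at a better fitness, the search escapes, exactly as described in the post above.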

I realize this doesn't answer your question about papers. I would guess that calculating the chances of escaping a local maximum's well depends on the problem and on how the GA is implemented (i.e., the fitness function and details like the mutation rate). It doesn't seem likely to me that there are papers that do this in a fully general sense, as it looks like a very case-by-case problem.

My only suggestion is to search Google Scholar. Sorry for not being overly helpful. Good luck.
 