Happy new year! I hope you are doing well! As you know, those new algorithms are in "feature experiment" status, which means that there is still work being done on them. So far, this is my understanding from discussing with our developers:

**DLSX.** This is similar to our existing Damped Least Squares method but streamlined in some respects. For example, this method will not restart in "Automatic" cycle mode, whereas Damped Least Squares sometimes will. DLSX also uses a somewhat different damping schedule and a more aggressive line-stepping algorithm. This method could be useful when the merit function is smooth and the user wants a result as quickly as possible, without worrying about some of the refinements that the Damped Least Squares method provides.

**Pseudo-Second Derivative.** This is a modification of DLSX that uses a pseudo-second derivative when close to a local minimum to accelerate the ramping down of the damping factor during the optimization (see the sketch at the end of this section). The idea of using the pseudo-second derivative is derived from Don Dilworth's papers, but the implementation here, which relies on DLSX for the bulk of the optimization trajectory, is unique to OpticStudio. The effect of incorporating pseudo-second derivatives can be faster convergence near the optimum, at least in principle.

I'd be curious and interested to learn about more findings and feedback from users as to when those algorithms are most useful. I'll be sharing my findings in this forum thread should I find anything of interest :)

I have a couple of thoughts on this, but it all comes down to the issue of "when do you know to use what method?". We've known for a long time that if you just fiddle with adjustable parameters in the code (not available to the end user), you can improve the optimizer for any specific file. The existing values of those parameters were a compromise of results over a batch of test cases. The problem is that tuning for one file worsens all the others! The problem I have with the new optimization methods as implemented is that there is no way to know in advance which one works best, at least from the perspective of an end user who is not running the raw code in Run mode inside Visual Studio. In contrast, the improvements made to the optimizer in the previous release, 20.3, improved optimization for all systems across the board, no questions asked :-)

I'd like to suggest a couple of things for testing:

1. Only compare Local optimizers against each other. Hammer and Global both use extensive randomization methods, and no two runs of these give exactly the same results or take the same amount of time. Try repeating any of the Hammer or Global tests three times, with the same local optimizer and starting conditions. You'll get three different results, and you'll have no way of predicting in advance which run will be best :-)

2. I think Tom's two-variable, 5-target optimization is too simple. You can do a two-variable system by hand: the optimization algorithms should be judged by systems that are not capable of being solved in closed form. Csilla's 8-variable, 99-target system should be the minimum complexity, in my (not very) humble opinion.
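To make the pseudo-second-derivative damping idea above concrete, here is a minimal toy sketch in Python. Everything in it is an assumption made for illustration: the function name `damped_least_squares`, the `residuals`/`jacobian` callables, the damping multipliers, and the finite-difference curvature probe are all invented for the example and are not OpticStudio's actual implementation.

```python
import numpy as np

def damped_least_squares(residuals, jacobian, x0, cycles=50,
                         damping=1.0, near_min_tol=1e-3):
    """Toy damped-least-squares loop with a pseudo-second-derivative
    damping ramp-down, loosely in the spirit of the description above.
    This is NOT OpticStudio's code -- just an illustrative sketch."""
    x = np.asarray(x0, dtype=float)
    merit = np.sum(residuals(x) ** 2)
    for _ in range(cycles):
        r = residuals(x)
        J = jacobian(x)
        # Damped normal equations: (J^T J + damping * I) dx = -J^T r
        A = J.T @ J + damping * np.eye(len(x))
        dx = np.linalg.solve(A, -(J.T @ r))
        new_merit = np.sum(residuals(x + dx) ** 2)
        if new_merit < merit:
            rel_drop = (merit - new_merit) / max(merit, 1e-30)
            x, merit = x + dx, new_merit
            if rel_drop < near_min_tol:
                # Near a local minimum: probe the merit curvature along dx
                # with a second difference (a "pseudo-second derivative")
                # and ramp the damping down faster when the merit function
                # looks locally quadratic and convex along the step.
                m0 = merit
                m1 = np.sum(residuals(x + dx) ** 2)
                m2 = np.sum(residuals(x + 2.0 * dx) ** 2)
                curvature = m2 - 2.0 * m1 + m0
                damping *= 0.1 if curvature > 0 else 0.5
            else:
                damping *= 0.5   # ordinary ramp-down on an accepted step
        else:
            damping *= 10.0      # step rejected: increase damping
    return x, merit

# Hypothetical usage on a tiny 2-variable, 3-target system (one with a
# closed-form answer, so exactly the kind of case the post above argues
# is too easy to be a meaningful benchmark):
res = lambda x: np.array([x[0] - 1.0, x[1] + 2.0, x[0] * x[1] + 2.0])
jac = lambda x: np.array([[1.0, 0.0], [0.0, 1.0], [x[1], x[0]]])
x_opt, m = damped_least_squares(res, jac, np.array([0.0, 0.0]))
```

The only point of the sketch is the branch inside `if rel_drop < near_min_tol`: once progress per cycle becomes small, a cheap second difference along the last step direction stands in for true second-derivative information and drives the damping toward zero more aggressively, which is one plausible reading of how such a scheme could speed up convergence near the optimum.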