Hi everyone! I just want to mention the Disciple level set by Crazy Monk (me and Matthias as authors). It is quite remarkable that Takaken only solves a little more than 50% of them (27/50). YASS and JSoko, which are good solvers, solve only 12/50 and 10/50 respectively, which is a really low number of successes! Sokolution (another excellent solver) only reaches 36% (18/50). Now, the latest Festival solver (which I don't know personally) achieves the best score with 70% (35/50).

However, 70% is still a low percentage of success when you compare it with the results that all these solvers achieve on famous level sets by famous authors. On Kevin1 (by K.B. Skinner), the same solvers have 36+26+31+37+46 = 176 levels solved in total, which is amazing when you consider that Festival achieves 92% success. On the other hand, the same solvers solve only 27+12+10+18+35 = 102 levels of Disciple, with a best score of 70% by Festival. Evgeni Grigoriev, a respected author whose levels are never easy to solve, gets amazing results on Grigr 2001: 94, 93, 92, 96 and 96% success by the considered solvers! And on Sasquatch, a highly regarded level set by the excellent David W. O'Reilly, the ratio of success ranges from 74% to 100% (Festival), with the other results reaching 95, 86 and 86%.

I'm not trying to say that the Disciple level set is a good benchmark! That would be vain and stupid. What I'm trying to say is that benchmarking is a bit like comparing pineapples and carrots. Some small levels are tough brain breakers for a human brain, because solving them takes numerous "try and fail" attempts. Usually (nearly all the time), solvers can do much better than I do on such small brain breakers: they are really good on such levels because they can quickly test millions of positions! When the levels get bigger, though, solvers lose their advantage over the human brain, because they cannot "see" the strategy of parking boxes on intermediate positions. This is why benchmarking solvers on a 'Large Test Suite' is certainly a better indication, but still not a definitive estimation of the value of each solver. It just depends on what kind of levels you like to play and for what kind of levels you would like the help of a solver. Needing a solver means you're stuck on a 'brain breaker'. If it's a small level, I guess any of the above-mentioned solvers will do the job; on big brain breakers, maybe some will do a better job than others. From the discussions I had with Matthias, Sokolution seemed to have the 'lead' until recently. Maybe Festival is the new standard? I don't know.

Does anyone know in which programming language Festival is written?

Background: I first installed the plugin in YASC using the installer file that was provided in the zip archive. I've never been too keen on using the command prompt because of all the lengthy path names I have to type in. Anyway, Atlas 1 was so smooth it impressed me. In fact, so much so that I just dove into Atlas 2 expecting similar results. I had to leave for most of the day, so I left it running in YASC to finish Atlas 2 unmonitored. When I returned (about 7 or 8 hours later), Festival was still working on Level 5 and showed an elapsed time of almost 11.5 hours. I wasn't gone for 11 hours, so that was my first indication that something was amiss. I stopped the solver and then started it again with the next puzzle in the file. I also decided to go ahead and run the command line version to see what it would do. Checking the directory where I had saved the files I was using for this test, I noticed there were no files generated by Festival, even though it was the current directory in YASC (and later, in Command Prompt, I CD'd to that directory before running Festival).

J-P
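The percentages quoted for Kevin1 and Disciple can be re-derived from the raw solve counts. Here is a minimal sketch; the mapping of each count to a solver follows the order the counts appear in the text (an assumption), and both sets are assumed to contain 50 levels:

```python
# Re-deriving the success rates quoted above.
# Counts are "levels solved out of 50" per solver; which count belongs to
# which solver is inferred from the order in the text (an assumption).
kevin1 = {"Takaken": 36, "YASS": 26, "JSoko": 31, "Sokolution": 37, "Festival": 46}
disciple = {"Takaken": 27, "YASS": 12, "JSoko": 10, "Sokolution": 18, "Festival": 35}

for name, counts in (("Kevin1", kevin1), ("Disciple", disciple)):
    # Totals come out to 176 and 102, matching the sums in the text.
    print(f"{name}: {sum(counts.values())} solutions in total")
    for solver, solved in counts.items():
        print(f"  {solver}: {solved}/50 = {solved / 50:.0%}")
```

This reproduces Festival's 92% on Kevin1 and 70% on Disciple, and Takaken's 27/50 comes out at 54%, consistent with "a little more than 50%".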