Ground-truth answer is missing for the training/validation set
Great job! I'm trying to use this dataset. However, I've found that there seem to be no ground-truth answers in either the training set or the validation set. I originally thought that the labels had been arranged in the "correct" order, but after carefully examining several samples, I realized that wasn't the case.
For example, take this sample:
Clues:
[ "Charles's expedition will leave 1 month after Alvin's team.", "The expedition leaving in April will include Naomi.", "Bob's expedition will leave 1 month after Janice's team.", "The four teams will be Bob's expedition, the expedition leaving in January, Naomi's expedition and Traci's expedition." ]
- Months: "January", "February", "March", "April"
- Chiroptologists: "Alvin", "Bob", "Charles", "Spencer"
- Speleologists: "Connie", "Janice", "Naomi", "Traci"

The labels don't seem to satisfy these clues.
So I was wondering if you could provide the answers for the training set and the validation set. I would be extremely grateful!
My initial idea was to treat it as an unsupervised dataset and generate a solver for each problem, since there were no golden answers at the source of these puzzles. I then noticed that they all follow the same pattern and are generated from 20-30 templates. So the dataset could be useful for evaluation, but not for fine-tuning.
These puzzles have basically been "solved" by o3-mini, so this dataset should probably be deprecated at some point.
Thank you for your quick response. And I appreciate the effort you’ve put into collecting and sharing it—it’s been an interesting resource to explore, even if it’s unsupervised.
BTW, I'm very interested in learning more about the solver you generated for these puzzles. Could you share some details on how it works? Additionally, would it be possible to open-source the solver? I think it would be very helpful for better understanding and utilizing this dataset.