OK so AIs currently can't truly reason, but they can sure simulate it well. Many of them, anyway. So well that we come to expect reasoning ability. After all, isn't that the "intelligence" part of "artificial intelligence"? So when they spectacularly fail, it's really disappointing. (Or am I just expecting too much?) Let's go through an example of what happened (see slides at the bottom):

Slide 1: I gave it some instructions to edit a story in progress, and also asked for suggestions.

Slide 2: Some of the "suggestions" it gave were things I had already asked it to do. In comparison, when I worked with ChatGPT on stories, it would give actual suggestions of things not yet reflected in the story, and it wouldn't incorporate them into the draft until after I gave the go-ahead. That's much more in line with what I feel a "suggestion" is.

Slide 3: I tried again to solicit suggestions. This time it seemed to outright ignore my inst...