TCXT is a section of our blog where our Director of QuestionPro Customer Experience reflects and shares his thoughts on everyday moments and their relationship to customer experience.
Want to hear Ken Peterson’s thoughts directly? Watch the video summary here:
In November 2023, during the early days of Generative AI and AI prompting, there was certainly some learning to be done to “get the right answer”. The nuances of prompting were fresh and new for everyone. Even with our early entry into this space, our QuestionPro AI Survey Builder sometimes left a little to be desired in its generated outputs. That was the result of two factors:
- The version of the LLM, and the range of content it could understand based on how well it was trained
- The prompts provided in the query, which determined how well it understood the natural language and the intent behind the request
Fast-forward to today, and there have been substantial improvements in the prompt processing inside LLMs, along with many more model variations. It is now worth considering which large language model you are going to use and how well it fits your needs.
Consider, too, that there are quite a few publicly branded LLMs out there, each promoting itself as better than the others, and differentiating between them can still be a challenge. But there is no doubt that LLMs have improved greatly in the years since the first OpenAI prompting mechanism was introduced.
However, the other challenge is still there. The processing of prompts and workflows by LLMs has accelerated in sophistication, but I might argue that user prompts have degraded. Frequent users have probably found ways to improve their prompts, and early adopters, the ones who tend to be a little more tech savvy than others, are likely far better than new users at thinking through and structuring a prompt.
However, when considering the entire set of prompts submitted in 2026 versus 2023, the overall quality of those prompts has declined. This is really a volume issue: the user base has grown from roughly 100 million active users in 2023 to 800 million in 2025. Those with less experience than the early adopters, now closer to 85% of users, are still treating it like a Google search. The content retrieved today is certainly going to be better thanks to the backend engine, but it could be better still with the right prompt.

Those early adopters have had a few years of picking the right model and learning how best to phrase prompts to get the most out of the LLM. The new wave of LLM users may not always get the prompt right and may not be experienced enough to validate the outputs. They may improve over time, but until AI is saturated into our daily lives, there will always be the experts and the newbies.
The reason that is relevant is what we learned when we started looking at our QuestionPro Customer Experience platform and built our first AI tool, the QuestionPro AI Survey Builder. Released in December of 2023, it was an accelerated development effort that helped individuals more quickly produce a survey about a specific topic. It still allowed for human interaction to confirm the content (AI is not perfect, after all, because the inputs into the LLM are not perfect), but the ultimate goal was efficiency.
The idea was to edit questions or scales rather than think up the questions, spend time inputting them, and then worry about what comes next. Instead, edit and move on to the next steps. In this situation, the open prompt is a requirement, but it also left quite a bit of leeway in what was produced. One could get twenty questions instead of ten if the count was not specified. A prompt asking about an “automotive experience” could return results for the sales experience, the service experience, vehicle quality, or even a combination of the three. That prompt is critical.
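To make that concrete, here is a minimal sketch, in Python and purely for illustration, of how pinning down the question count, focus area, and scale removes that leeway. The build_survey_prompt helper and the example values are my own assumptions, not part of any QuestionPro API.

```python
# Illustrative only: how prompt specificity changes what a survey builder returns.
# The helper function and example topics are hypothetical, not QuestionPro code.

def build_survey_prompt(topic: str, num_questions: int, focus: str, scale: str) -> str:
    """Assemble a structured prompt instead of a one-line request."""
    return (
        f"Create a customer experience survey about {topic}. "
        f"Focus only on the {focus}. "
        f"Generate exactly {num_questions} questions, "
        f"each using a {scale} response scale."
    )

# Vague prompt: leaves question count and focus entirely to the model.
vague = "Create a survey about the automotive experience."

# Structured prompt: pins down count, focus area, and scale.
structured = build_survey_prompt(
    topic="the automotive experience",
    num_questions=10,
    focus="post-purchase service experience",
    scale="5-point satisfaction",
)

print(vague)
print(structured)
```

The difference between the two strings is the difference between ten targeted service questions and an unpredictable mix of sales, service, and vehicle-quality questions.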
We see many AI tools that promise the world, but what will always hold them back is the quality of the prompts from users. The users of the QuestionPro platform are sophisticated, and I do not doubt the quality of their prompts, but in Customer Experience you may be rolling out usage to tens, hundreds, or even thousands of users. That is why, when you start looking at the QuestionPro and QuestionPro Customer Experience platforms, our AI tools look like a “button” when each one is really pulling outputs based on well-structured prompting behind the scenes.
It may look like a simple tool, but we are pairing a best-in-class LLM with a concise prompt to give our users, especially frontline users, a simple yet accurate way to get the information they need to best serve their customers. It improves efficiency and communication, but with guardrails so you do not accidentally give the customer incorrect information.
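As a rough sketch of that “button” idea, here is one way a pre-engineered prompt and a simple guardrail could sit behind a single action. The template text, the summarize_feedback function, and the call_llm parameter are hypothetical illustrations, not a description of QuestionPro’s actual internals.

```python
# A minimal sketch of prompt engineering hidden behind a "button":
# frontline users never write the prompt themselves.
# Everything named here is a hypothetical example, not QuestionPro code.

FEEDBACK_SUMMARY_TEMPLATE = (
    "Summarize this customer's recent feedback for a frontline agent. "
    "Use only the feedback provided below; do not guess or add details. "
    "If the feedback does not answer a question, say so explicitly.\n\n"
    "Customer feedback:\n{feedback}"
)

def summarize_feedback(feedback: str, call_llm) -> str:
    """One 'button press': fill the pre-engineered template and send it to the LLM.

    call_llm is any function that takes a prompt string and returns the model's
    text, keeping the sketch independent of any specific vendor API.
    """
    prompt = FEEDBACK_SUMMARY_TEMPLATE.format(feedback=feedback)
    return call_llm(prompt)

if __name__ == "__main__":
    # Stand-in model call so the example runs without an API key.
    fake_llm = lambda prompt: "(model response would appear here)"
    print(summarize_feedback("Survey comment: slow follow-up after service visit.", fake_llm))
```

The design point is that the careful prompting and the “use only the feedback provided” guardrail live in the template, so every user gets the benefit of a well-structured prompt without having to write one.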
Over the next few weeks, I will feature some of the unique AI features we have in our QuestionPro Customer Experience platform, not as a promotion, but as thought-starters that will allow you to think about how you are leveraging AI within your own organization to improve the customer experience – and it all starts with the prompts.
If you would like to learn more, please do not hesitate to reach out to me and schedule time; I’m always excited to talk about the future.



