
Over the last few days, I have been trying out a side-by-side comparison of ChatGPT and Perplexity (paid pro versions of both).

My use case is primarily background research for work and personal topics. I use these tools to help me identify relevant information and sources for topics and then summarize the resulting large amount of information. I also use AI to help expand my thinking and identify other potential areas of inquiry related to a topic that I might not have considered.

My use is an even 50/50 split between the web and mobile versions of each service.

For about a week I have used both ChatGPT and Perplexity side by side, pasting every query into both tools and comparing the results. I am using their subscription services ($20/month for each). My use has included a couple of days with ChatGPT's most recent update and the new GPT-4o model in both apps.

I considered how each tool performed in specific domains and rated each domain according to which tool I thought did better (for my particular use case).

💬 = ChatGPT excelled at a particular category.
✳️ = Perplexity excelled at a particular category.
👔 = a tie (either both did similarly well or poorly, or it's a toss-up depending on individual needs)

Again, my use case is about needing large amounts of complex information deciphered and sourced, but I also have some more general use cases. My ultimate goal in this exercise was to try and figure out which tool worked better for me and is therefore worth my $20/month (because I am not going to continue to pay for both!).

Overall responses. In general, the responses from ChatGPT were shorter and more succinct. Perplexity responses consistently contained more detail, noting exceptions and other noteworthy information. I think which is better depends on individual needs. For my particular use case, however, Perplexity wins. ✳️

Accuracy. Both ChatGPT and Perplexity sometimes hallucinated and gave bad responses. For example, I asked about a specific federal limitation on something and ChatGPT was dead wrong. Another time I asked both to summarize an OMB memo and Perplexity seemed to be summarizing a completely different OMB memo.

They were rarely both wrong at the same time, which suggests the incorrect responses come from differences in the underlying software rather than a shared flaw. When one got something wrong, the other generally got it right. ChatGPT did seem to flub simpler requests more often, though. Regardless, this exercise demonstrated that these tools cannot be relied on for the final answer; they are merely a jumping-off point, much like any search engine. 👔

Follow-up questions. After answering a prompt, Perplexity will generate some follow-up questions that you can click to ask. When doing research, this is incredibly helpful. ChatGPT does not do this, which is unfortunate. ✳️

News. Each day I entered a query asking both AI tools to give me the top headlines in the U.S. They consistently gave me very different results. ChatGPT's results tended to have more tech- and AI-related headlines and frequently nothing on politics or actual major news stories. Perplexity seemed to be more in line with the actual big news stories in the U.S. and around the world. ✳️

Image generation. No contest, ChatGPT wins here. Perplexity can't generate images just from prompts. On the web (or in Safari on iPhone, but not in the mobile app) you can get it to generate a DALL-E image related to a query you've made, but otherwise there is no image generation. This is a bummer. 💬

Feature set. ChatGPT is like a Swiss Army knife, mostly thanks to its GPT store. There is a GPT for just about anything, and I have definitely made use of some of these. For example, there are GPTs for teaching and solving math problems, converting files, answering medical questions, and putting together charts. While Perplexity offers some customization capabilities (like being able to focus on YouTube or academic papers), it doesn't come close to what ChatGPT can do. 💬

Organization. Both apps allow you to easily search your past queries, but only Perplexity provides the ability to organize queries into collections. I find this really helpful since I am often working on searches related to specific topics of interest. My ChatGPT results quickly became very messy. ✳️

Exporting chats. Perplexity exports are nice, especially if they are going into a Markdown app (like Obsidian, which I use). When I copy and paste results from a Perplexity thread, the output is neat and the footnotes are automatically created and linked. ChatGPT's exports are not as neat and often get a bit jumbled when links are included. ✳️

Summarization. ChatGPT generally did a better job of succinctly summarizing information, getting the point across in fewer words and a more concise format. 💬

Title or headline creation. I gave each tool some text and asked it to come up with a title or headline. I think ChatGPT performed better here, generating more succinct and relevant headlines that just sounded better. 💬

Understanding my request. Perplexity generally did a better job understanding my queries. For example, I asked each what a certain data set was (wanting to know the purpose behind the data collection effort). ChatGPT interpreted this to mean I wanted the results of the data set, whereas Perplexity understood that I was asking about the data set and collection, not the results. ✳️

Cover letter exercise. I told each tool to write a cover letter for a certain position. ChatGPT immediately spit out a good but generic cover letter with placeholders to fill in company name and other details. Perplexity, however, asked follow-up questions (name of the company, job responsibilities, and details about my own experience) and generated a much more personalized and detailed letter. I was impressed with how good the follow-up prompts were and the overall results. ✳️

Sourcing and citations. This is, for me, the biggest area where Perplexity excels. In most cases I have to specifically ask ChatGPT to provide linked sources, and even then it often does so only in a few places. Perplexity, however, automatically provides citations for each statement it makes. It also lists the sources at the top of each chat, allowing me to quickly scan the sources (presented in neat little cards) to see where it got its info from. ✳️

Number of sources. Perplexity consistently cited and pulled from more sources when asked questions. Where ChatGPT often cited 2-3 sources at most, Perplexity often had 6 or more. ✳️

Quality of sources. It seemed to me that ChatGPT often relied on less reliable sources to respond to a question, such as questionable news sources or websites. Perplexity, on the other hand, consistently drew on more reliable and reputable sources such as government websites, research foundations, and peer-reviewed literature. ✳️

Nuance. ChatGPT answered questions in a more concise and straightforward way but failed to touch on important nuance around an issue. For example, I asked both about a federal cap on a certain activity. Both cited what the government set as the cap, but only Perplexity then elaborated that a recent report had called this cap into question and the federal government was going to be issuing further guidance on the matter. ✳️

Table generation. I asked both tools to generate a table detailing a federal award timeline. Using the same prompt, Perplexity provided much more detail in the table and did a neat job of citing the source for each row. ChatGPT just generated a four-row table and did not provide sources. ✳️

Peer-reviewed research. I asked both to identify recent peer-reviewed studies on implementation of a federal program. ChatGPT came back with only 3 studies, whereas Perplexity identified 13; I had similar returns for other topics. When asked to summarize the published research, Perplexity was much more thorough in capturing the key findings, while ChatGPT focused on being more concise. ✳️

Overall, Perplexity seems to be the better fit for my needs, and it's probably the app I will go with for now. For ChatGPT to work for my daily needs, it would have to vastly improve its citation abilities and the number of sources it references and cites.