It depends on the search structure.
Most humans are afraid to type directly into their browser's address bar -- if they even know what it is -- so instead they go to Site Search and say "gimme some articles about 19th-century widgets". And then site search, which may not really be a search at all, converts the request into /widgets/century/19/ without first checking whether such a page exists on the site. And then the CMS does some clanking and churning and spits out a page whose sole content is "we can't tell you anything about widgets in the 19th century".
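A minimal sketch of that pattern, with made-up data and a hypothetical `handle_request` function (no real CMS works exactly like this): the handler renders a page for any century the URL names, and reports 200 whether or not it has anything to say.

```python
# Made-up article database: only one century actually has content.
ARTICLES = {17: ["The Widget Boom of 1680"]}

def handle_request(path):
    # e.g. path = "/widgets/century/19/"
    century = int(path.strip("/").split("/")[-1])
    articles = ARTICLES.get(century)
    if articles:
        body = "\n".join(articles)
    else:
        # The honest response here would be a 404, but the page template
        # exists and renders fine, so the server serves it as a 200.
        body = "We can't tell you anything about widgets in that century."
    return 200, body  # status is 200 either way -- a "soft 404"

status, body = handle_request("/widgets/century/19/")
```

The bug isn't in any one line; it's that the existence check never happens before the status code is chosen.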
All those HTTP response codes were codified long before -- that is, ahem, "long" in Internet terms -- it became commonplace to generate the page first and put in the content second. So you could have a site where, as far as the server is concerned, no request ever meets anything but a perfect 200. But a search engine wouldn't be doing its job if it didn't distinguish between real pages with real content and pages that were created only because the developer forgot to code for bad requests.
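One toy heuristic a crawler might use to make that distinction (this is an illustrative assumption, not any real search engine's method): request a deliberately bogus URL to capture the site's error template, then flag any 200 page whose body closely resembles it.

```python
from difflib import SequenceMatcher

def looks_like_soft_404(page_body, error_template_body, threshold=0.9):
    # Compare the candidate page against the body served for a URL that
    # cannot possibly exist. A high similarity ratio suggests the page is
    # the error template wearing a 200 status -- a soft 404. The 0.9
    # threshold is an arbitrary choice for this sketch.
    similarity = SequenceMatcher(None, page_body, error_template_body).ratio()
    return similarity >= threshold

error_page = "We can't tell you anything about widgets in that century."
suspect    = "We can't tell you anything about widgets in that century."
real_page  = "The Widget Boom of 1680 transformed cottage manufacturing."
```

Real crawlers are far more sophisticated, but the core idea is the same: the status code alone can't be trusted, so the content has to be judged too.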