No, not weird: The variables are differently-scoped in mod_rewrite versus PHP.
To get the URL-path only in mod_rewrite, use the pattern in the RewriteRule itself (preferred for efficiency), or use %{REQUEST_URI} in a RewriteCond.
To get the query string in mod_rewrite, use %{QUERY_STRING} in a RewriteCond.
To get the entire client request line in mod_rewrite, including URL-path, query string, URL-fragment, and request protocol, use %{THE_REQUEST} in a RewriteCond. Note that this is the entire request line sent by the client, exactly as it appears as a quoted string in your raw server access log file.
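As a sketch, a condition keyed on THE_REQUEST might look like this (the pattern here is hypothetical, just to illustrate; note that the literal spaces in the request line must be escaped in the regex):

RewriteCond %{THE_REQUEST} ^GET\ /products/[^\ ]*\?a= [NC]

Because THE_REQUEST holds only what the client originally sent, a later internal rewrite that adds a query string will not re-trigger a condition written this way.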
Which form you need to use depends on exactly what you are doing. If you are not also rewriting in the other direction, then the simple QUERY_STRING method should work. However, if you are internally rewriting the 'friendly' URLs back to a form that resembles the URL-path-plus-query-string, then that rule and this new one may together create an infinite loop, and you will have to use the THE_REQUEST method to avoid it.
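Sketched with the example from your first post (assumed parameter formats -- adjust the character classes to match your actual values), the loop-safe form keys a condition on THE_REQUEST so the rule fires only on the client's original query-string request, never on an internally rewritten one:

RewriteCond %{THE_REQUEST} \?a=[0-9]+&b=[^&\ ]+&c=[^&\ ]+\ HTTP/
RewriteCond %{QUERY_STRING} ^a=([0-9]+)&b=([^&]+)&c=([^&]+)$
RewriteRule ^([a-z]+/[0-9]+/[a-z]+)/$ /$1-%1-%2-%3.html [L]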
Simple version of your added rule above:
RewriteCond %{QUERY_STRING} ^p=1$
RewriteRule ^([^/]+/[^/]+)/([^/]+)/([^/]+)/$ /$1/$3-$2.html [L]
But that does not comport with the stated goal in your first post which was to rewrite
URL-path /products/3/dvds/?a=10&b=name&c=desc to
filepath /products/3/dvds-10-name-desc.html
That would be answered by
RewriteCond %{QUERY_STRING} ^a=([0-9]+)&b=([^&]+)&c=([^&]+)$
RewriteRule ^([a-z]+/[0-9]+/[a-z]+)/$ /$1-%1-%2-%3.html [L]
assuming all-lowercase names and descriptions -- otherwise add the [NC] flag. I am also assuming that you want to rewrite all subdirectory URL-paths -- "products" and any others matching the specified format.
Please notice that I have made the regular-expression sub-patterns much more specific. The rules I've posted may run orders of magnitude faster than yours, because I explicitly define where each subpattern match must end. Your use of multiple ".+" subpatterns forces the matching engine into extensive backtracking, trying thousands of combinations to find a "best fit" apportionment of the requested string among the ambiguous subpatterns.
This is because on the first pass, the matching engine matches the entire string into the subpattern for $1. It then finds that the rest of the pattern fails to match, so it gives back one character from $1 and tries again. This continues until the subpatterns after $1 are no longer "starved", and with a long requested URL this can take many, many iterations.
In contrast, by using specific character-set-matches (e.g. [a-z]+ or [0-9]+) or negative-character-matches (e.g. [^/]+ meaning "match one or more characters not a slash"), the improved pattern can always be matched in a single left-to-right pass.
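The difference is easy to observe in any backtracking regex engine. Here is a small Python sketch (Python's re module backtracks much like the PCRE-style engine mod_rewrite uses); the paths and patterns mirror the example above:

```python
import re

path = "products/3/dvds/"

# Ambiguous pattern: greedy ".+" subpatterns can each swallow slashes,
# so the engine must backtrack, repeatedly re-apportioning characters
# among the groups until the whole pattern fits.
ambiguous = re.compile(r"^(.+)/(.+)/(.+)/$")

# Specific pattern: "[^/]+" can never cross a slash, so each group's
# end point is unambiguous and the match completes in one pass.
specific = re.compile(r"^([^/]+)/([^/]+)/([^/]+)/$")

# Both arrive at the same final apportionment on a matching URL...
assert ambiguous.match(path).groups() == ("products", "3", "dvds")
assert specific.match(path).groups() == ("products", "3", "dvds")

# ...but on a long non-matching input (no trailing slash) the ambiguous
# pattern must try many apportionments before it can fail, while the
# specific one fails quickly.
long_no_match = "a/b/" + "x" * 30
assert ambiguous.match(long_no_match) is None
assert specific.match(long_no_match) is None
```

The assertions pass either way; the cost difference shows up only in how much work the engine does before answering, which is exactly why the specific patterns matter under server load.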
Jim