Actually, leebow's solution is safer than Lucy24's solutions for general use because it allows for internal braces.
For example, say your data looks like this:
{block1}You can use a simple quantifier like \d{2} to say "match two digits" {/}
Leebow's solution will correctly match, returning:
block1 for the first capture group, and
You can use a simple quantifier like \d{2} to say "match two digits" for the second.
Lucy24's first solution will return:
2 for $1, and
to say "match two digits" for $2.
Lucy24's more restrictive solution will fail to match at all and will return nothing.
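The exact patterns from the earlier posts aren't quoted here, so the two below are illustrative guesses of mine, not the original posters' code; they are just the simplest patterns that reproduce the behavior described above:

```python
import re

# The sample data from above, as a raw string so \d stays literal.
data = r'{block1}You can use a simple quantifier like \d{2} to say "match two digits" {/}'

# Guess A: inner classes that forbid braces entirely. The engine cannot
# cross the internal {2}, so the match latches onto the last brace pair.
guess_a = re.compile(r'\{([^{}]*)\}([^{}]*)\{/\}')
m_a = guess_a.search(data)
print(m_a.group(1))  # 2
print(m_a.group(2))  # ' to say "match two digits" '

# Guess B: a lazy middle group that is allowed to cross internal braces.
guess_b = re.compile(r'\{([^{}]*)\}(.*?)\{/\}')
m_b = guess_b.search(data)
print(m_b.group(1))  # block1
print(m_b.group(2))  # the whole sentence, internal {2} included
```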
I am guessing the scream "Nooo!" is because general matches and lazy quantifiers are inefficient? On a modern regex engine, for a simple pattern like this, I doubt there is much difference between the negated character class and the lazy quantifier. They amount to about the same thing.
As things get more complex, however, lazy quantifiers, like lookaheads and lookbehinds, do get expensive and demand a lot of work from the regex engine. Effectively, a lazy quantifier *is* a simple lookahead: it forces the engine to look ahead and backtrack, while a negated character class is a greedy expression that gobbles up what it finds as it goes along until it hits a roadblock, then simply moves on with the evaluation.
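For the simple case, the two spellings land on the same text; a minimal Python sketch (the pattern and string here are mine, for illustration):

```python
import re

s = '{block1}hello{/}'

# Lazy: at each step the engine tries the tail (}) first, then grudgingly
# consumes one more character and tries again -- repeated lookahead-style work.
lazy = re.search(r'\{(.+?)\}', s)

# Greedy negated class: consume everything that isn't a }, stop, done.
greedy = re.search(r'\{([^}]+)\}', s)

print(lazy.group(1), greedy.group(1))  # block1 block1
```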
So what to do?
Option 1: unless this seems to be slowing things down dramatically, leave well enough alone.
Option 2: get really complicated with your regular expressions and use the regex engine to the fullest. In that case you'll end up with something like this:
{([^}]*)}((?:[^{]++|{(?!\/}))*+){\/}
{([^}]*)} - matches the opening delimiter and captures the contents as $1
((?:[^{]++|{(?!\/}))*+) - this is the complex stuff
(?:) - creates a non-capturing group, meaning it groups for purposes like alternation, but doesn't capture.
++ and *+ create possessive matches, meaning match until failure, but on failure, don't give up matched characters and don't backtrack. Backtracking is the expensive part of regular expressions, so by limiting it, we gain efficiency.
(?!) - creates a negative lookahead, meaning match X only if it is not followed by Y
So if we start putting those together, starting from the center, we have
1. (?:[^{]++|{(?!\/})) - a non-capturing group that matches either one or more characters that are not a { in a possessive match (don't backtrack on failure), OR a { not followed by /}. The group is non-capturing because otherwise you would end up with a third capture group containing the text from the 2} forward in our string (the last repetition of the group).
2. We wrap #1 in a capturing group and apply *+, because we'll take as many of these matches as we can get, linked together, without backtracking.
{\/} - matches your closing delimiter with no capture
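Putting the whole thing together on the sample data: possessive quantifiers (++, *+) need PCRE, Java, or Python 3.11+, so this sketch uses the plain greedy equivalents, which match exactly the same strings (they just permit the backtracking the possessive version forbids):

```python
import re

data = r'{block1}You can use a simple quantifier like \d{2} to say "match two digits" {/}'

# Non-possessive equivalent of {([^}]*)}((?:[^{]++|{(?!\/}))*+){\/}
# (swap + for ++ and * for *+ to get the possessive original on
# engines that support it).
pattern = re.compile(r'\{([^}]*)\}((?:[^{]+|\{(?!/\}))*)\{/\}')
m = pattern.search(data)
print(m.group(1))  # block1
print(m.group(2))  # the full inner text, internal {2} and all
```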
Of course, your code is now almost impossible to read except by the most dedicated regex experts around. In my opinion, not worth it.
You can benchmark it if you want. I've benchmarked things like this in PHP and they tend to show tiny differences unless you are performing thousands of iterations. Rex over at the RexEgg site on regular expressions has benchmarked a very similar case, and over 10,000 iterations the difference was about 300ms, which works out to 0.03ms per iteration. The difference does increase with longer strings, but personally, I would need a very strong performance case to go down that road.
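If you want to reproduce that kind of measurement yourself, here is a quick harness using Python's timeit (a sketch; the patterns are the ones discussed above, and absolute numbers will vary by engine, machine, and string length):

```python
import re
import timeit

data = r'{block1}You can use a simple quantifier like \d{2} to say "match two digits" {/}'

# The lazy approach vs. the explicit-alternation approach (non-possessive form).
lazy = re.compile(r'\{([^}]*)\}(.*?)\{/\}')
alternation = re.compile(r'\{([^}]*)\}((?:[^{]+|\{(?!/\}))*)\{/\}')

for name, rx in (("lazy", lazy), ("explicit alternation", alternation)):
    secs = timeit.timeit(lambda: rx.search(data), number=10_000)
    print(f"{name}: {secs * 1000:.0f}ms for 10,000 iterations")
```

On a string this short, expect both timings to be small and close together; the gap widens as the subject string grows.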
Still, it's fun to play with.
This regex tester will break it all down for you, and you can see what happens when you change the regex or the string you're processing:
[regex101.com...]
And Rex explains, as clearly as anyone can, his "explicit greed" technique:
[rexegg.com...]