I have two questions. 1) Is it necessary to redirect index.html, or is it redundant because all pages are being redirected anyway? Should I get rid of those two lines regarding index.html? 2) Is the code all good? Do you see anything wrong with it? I just found these on different websites and combined them together, but I don't really know what I am doing.
Copying old solutions is not a good way to deal with issues. Many of the bots you list have not been seen in a very long time, and checking for them wastes your server's time. Look at the User-Agents of named robots that actually do visit your site and you will have a much smaller list. UAs can also be combined into a single pattern, which is more efficient.
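As a sketch of what "combined" means (the bot names here are just common examples — build your own list from the User-Agents you actually see in your logs), several separate `SetEnvIfNoCase` lines can collapse into one alternation:

```apache
# One combined pattern instead of a dozen separate lines.
# Bot names are illustrative; replace with the ones in your own logs.
SetEnvIfNoCase User-Agent "(AhrefsBot|SemrushBot|MJ12bot)" bad_bot

# Apache 2.4-style access control using that environment variable.
<RequireAll>
    Require all granted
    Require not env bad_bot
</RequireAll>
```

One pattern means one regex test per request instead of one per line, and the list stays in a single place when you need to prune it later.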
You don't mention why you don't want visitors to access your site except for one page; mass-redirecting everything to a single destination is usually not beneficial.
There are other issues: the syntax of both the redirects and the bot blocking needs work. Overall, you will probably want to start over, one part at a time, so you can understand your rules better and be able to maintain the file as it evolves.
If the [NS] version works, you don't need the with-condition version. If you have literal periods in your file paths (like apache dot org with all those /2.2/ and /2.4/ directories), the rule needs to be a little more complicated; I've given the simplest version.
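For reference, a minimal with-condition version looks like this (assuming the goal from the question — send every request to one page — and using /landing.html as a placeholder target):

```apache
RewriteEngine On
# Exclude the target itself, or the rule redirects to itself in a loop.
RewriteCond %{REQUEST_URI} !^/landing\.html$
RewriteRule ^ /landing.html [R=302,L]
```

Note that a rule like this also answers the index.html question: since it already matches every request, a separate redirect just for index.html would be redundant.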
None of the .* in the UA list are necessary, since there are no anchors.
Any time you've got a list longer than 3 or 4 lines, put them in alphabetical order. Or numerical order or whatever is appropriate.
You can shave a lot of bytes by replacing "SetEnvIfNoCase User-Agent" by "BrowserMatch". Add "NoCase" only if it's appropriate for the specific entity you're blocking. Textbook example: "Googlebot" is the real thing. "GoogleBot" is a spoofer. Most robots stick with a particular casing.
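Putting those last two points together, a trimmed version of the UA rules might look like this (`examplebot` is a placeholder for whatever you are actually blocking):

```apache
# BrowserMatch is shorthand for "SetEnvIf User-Agent";
# BrowserMatchNoCase is the case-insensitive form.
# No anchors in these patterns, so no leading/trailing .* is needed either.

# Case-sensitive on purpose: the real crawler always sends "Googlebot",
# so a UA containing "GoogleBot" is a spoofer.
BrowserMatch "Googlebot" real_google

# NoCase only where the bot is known to vary its casing.
BrowserMatchNoCase "examplebot" bad_bot
```

Both directives come from mod_setenvif, the same module that provides SetEnvIf, so this is purely a byte-saving rewrite of the same logic.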