There isn't really a limit on the bit vector itself other than available memory. The main limitation in the sieving process is the size of the index variables. If you wanted to go larger, replace the int indices with a 64-bit type. MSVC supports __int64 as a data type; in standard C/C++ you can use unsigned long long or std::uint64_t from <cstdint>.
I should mention that I didn't try sieving beyond 1 billion, since overflow errors should start to make the algorithm fail somewhere between 1 billion and 1.5 billion (probably around MAXINT / 3, give or take). My reasoning: when a prime p is found, the sieve starts crossing off composites with that prime as a factor, starting at p*p and incrementing by 2p. If adding 2p overflows the index, the loop won't terminate properly, because the index wraps around, becomes less than max again, and then starts knocking out smaller primes as if they were composite. This can first happen once (sqrt(max))^2 + 2*max > MAXINT, which is roughly 3*max > MAXINT.
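To make the failure mode concrete, here's a minimal sketch of an odd-only bit-vector sieve with an explicit guard against exactly that wraparound. The function name and layout are mine, not the original code's; bits[i] stands for the odd number 2*i + 1, and a set bit means composite.

```cpp
#include <cstdint>
#include <vector>

// Sketch of the sieve described above, with an overflow guard on the inner
// loop index. bits[i] represents the odd number 2*i + 1; true == composite.
std::vector<bool> sieve_odd(std::uint64_t max) {
    std::vector<bool> bits(max / 2 + 1, false);
    for (std::uint64_t p = 3; p * p <= max; p += 2) {
        if (bits[p / 2]) continue;                 // p was already crossed off
        for (std::uint64_t i = p * p; i <= max; i += 2 * p) {
            bits[i / 2] = true;
            // Guard: if i + 2p would wrap past UINT64_MAX, stop here rather
            // than letting the index wrap around and clobber small entries.
            if (i > UINT64_MAX - 2 * p) break;
        }
    }
    return bits;
}
```

Without the guard, the `i += 2 * p` step is exactly where the wraparound described above would occur once max gets close to the top of the index type's range.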
Mind you, generating the list of primes as 32-bit ints from the bit vector wastes a lot of memory. You'd probably want to drop that step and just return the bit vector if you wanted to go much larger. Replacing the unsigned ints with unsigned __int64 should raise the limit to about 2^64 / 3 (roughly 6,148,914,691,236,517,205), which would require about 96,076,792,050,570,581 bytes of memory for the bit vector.
I think you'd have to rewrite the algorithm to work in iterative chunks well before the 64-bit index limit was reached, just to avoid running out of memory. Not to mention that the expected run time to produce all primes up to 2^64 / 3 is well over thousands of years. (A really rough estimate, assuming the asymptotic run time is linear -- which is probably a low guess -- is over 32,000 years on my machine.)
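The "iterative chunks" idea is usually called a segmented sieve: sieve the small primes up to sqrt(max) once, then sweep the full range one fixed-size window at a time, so memory use depends on the window size rather than on max. A sketch (all identifiers are mine, and the segment size is arbitrary):

```cpp
#include <algorithm>
#include <cstdint>
#include <cmath>
#include <vector>

// Segmented sieve sketch: only one fixed-size window of the range is ever
// held in memory. Calls `emit(p)` for each prime p <= max, in order.
template <typename F>
void segmented_sieve(std::uint64_t max, F emit) {
    const std::uint64_t seg = 1u << 20;  // window size: ~1M numbers at a time

    // First, sieve the small primes up to sqrt(max) with a plain sieve.
    // (The double-precision sqrt is adequate for moderate max; a careful
    // version would correct for floating-point rounding near 2^64.)
    std::uint64_t limit = static_cast<std::uint64_t>(
        std::sqrt(static_cast<double>(max)));
    std::vector<std::uint64_t> small;
    std::vector<bool> comp(limit + 1, false);
    for (std::uint64_t p = 2; p <= limit; ++p) {
        if (comp[p]) continue;
        small.push_back(p);
        for (std::uint64_t i = p * p; i <= limit; i += p) comp[i] = true;
    }

    // Then sweep [2, max] one segment at a time.
    std::vector<bool> block;
    for (std::uint64_t low = 2; low <= max; low += seg) {
        std::uint64_t high = std::min(low + seg - 1, max);
        block.assign(high - low + 1, false);
        for (std::uint64_t p : small) {
            if (p * p > high) break;
            // First multiple of p inside [low, high], but no smaller than p*p.
            std::uint64_t start = std::max(p * p, (low + p - 1) / p * p);
            for (std::uint64_t i = start; i <= high; i += p)
                block[i - low] = true;
        }
        for (std::uint64_t i = low; i <= high; ++i)
            if (!block[i - low]) emit(i);
    }
}
```

The run-time estimate above still applies -- segmenting fixes the memory problem, not the time problem -- but this shape is how you'd keep going past the point where the whole bit vector no longer fits.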