A quick question please - does changing the field size of a varchar field from 2000 to 3000 have any detrimental effect on the data when working with SQL Server?
I'm using Enterprise Manager to work with the database. In MS Access I can increase the size with no problems, but I lose data if I decrease it below the size of the existing data, and I also get a warning when decreasing.
Can anyone who's done this on SQL Server give me some pointers? Oh, and it's a LIVE database with some fairly important data in it :-) (I know! - I'm just having to support it.)
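For anyone searching later: the change itself is a one-liner. A sketch, with a made-up table and column name - note that ALTER COLUMN also resets nullability if you leave it off, so restate NULL/NOT NULL to match the original:

```sql
-- Hypothetical names (dbo.Articles, Body). Back up first on a live database.
-- Increasing a varchar's declared length is a metadata-only change;
-- the stored data is not rewritten.
ALTER TABLE dbo.Articles
    ALTER COLUMN Body varchar(3000) NULL;  -- restate NULL/NOT NULL explicitly
```

Going the other way (3000 down to 2000) is where SQL Server will refuse or truncate, like the Access behaviour described above.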
Decided to risk it... and it turned out that you can't have much more than 8000 bytes (the actual limit is 8060) in a single SQL Server row anyway!
So it will require a proper redevelopment rather than a quick fix - and if it comes back to me, that will give me the chance to set up a testing environment.
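To illustrate the limit being hit - the per-row budget in SQL Server 2000 is 8060 bytes, and it is the *sum* of the columns' maximum sizes that matters, not any one column. A hypothetical table that trips it:

```sql
-- The declared maximums below total ~9000 bytes, over the 8060-byte row
-- limit. SQL Server 2000 issues a warning at CREATE time and errors only
-- when an actual row exceeds the limit; 2005+ can push varchar data
-- off-row, so this is mainly a concern on older versions.
CREATE TABLE dbo.TooWide (
    Id     int           NOT NULL,
    FieldA varchar(3000) NULL,
    FieldB varchar(3000) NULL,
    FieldC varchar(3000) NULL
);
```

So widening one varchar from 2000 to 3000 can be perfectly legal on its own yet still push the whole row over the edge.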
In case anyone ever wants to know, one solution is to move the larger fields into a separate table with just a primary key in it, linked back to the original table(s).
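A sketch of that split (vertical partitioning), again with illustrative names:

```sql
-- The wide column moves to its own table, keyed 1:1 back to the original,
-- so each table's row stays comfortably under the 8060-byte limit.
CREATE TABLE dbo.ArticleBody (
    ArticleId int           NOT NULL PRIMARY KEY
        REFERENCES dbo.Articles (ArticleId),
    Body      varchar(3000) NULL
);
```

Queries that need the wide field just JOIN on the key; everything else gets a narrower, faster row.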
- Script out the table, rename it in the script to something new like [tablenameTEST], and then create that new test table.
- Use DTS to copy the LIVE data into the test table.
- Expand the table column to fit your needs.
- Run some unit tests against it.
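The steps above can be sketched in T-SQL (names are illustrative; here SELECT INTO stands in for the script-out-and-DTS copy, which achieves the same thing for a quick test):

```sql
-- 1. Create a test copy of the table with the live data in it.
SELECT * INTO dbo.ArticlesTEST FROM dbo.Articles;

-- 2. Widen the column on the copy only.
ALTER TABLE dbo.ArticlesTEST
    ALTER COLUMN Body varchar(3000) NULL;

-- 3. Sanity-check: the longest stored value should be unchanged.
SELECT MAX(DATALENGTH(Body)) AS MaxBodyBytes FROM dbo.ArticlesTEST;
```

Once the tests pass on the copy, the same ALTER can be run against the live table with far more confidence.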
I've had a similar problem too. Rather than copy the whole table, I created a new table with a varchar column of 2000, added some data, then increased it to 3000 just to test the concept - and it worked. Might save a bit of time.
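That quick concept test might look like this (throwaway table, made-up name):

```sql
-- Minimal end-to-end check that widening preserves the data.
CREATE TABLE dbo.WidenTest (Notes varchar(2000) NULL);
INSERT INTO dbo.WidenTest (Notes) VALUES ('some sample data');

ALTER TABLE dbo.WidenTest
    ALTER COLUMN Notes varchar(3000) NULL;

SELECT Notes, DATALENGTH(Notes) FROM dbo.WidenTest;  -- value intact
DROP TABLE dbo.WidenTest;
```

Cheap to run, and it proves the behaviour on your exact server version before touching anything live.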