If my backup file is damaged so that the restore fails, is there any way to repair the damaged backup file?
(The RESTORE procedure on SQL Server 2005 reports an error on page 65535:-1.)
To the best of my knowledge, there's no repair option for a backup. But in 2005, you have the CONTINUE_AFTER_ERROR option for the RESTORE command. Of course, you will then have a database with some corruption, which you need to handle.
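A minimal sketch of that option (database and file names here are made up for illustration):

```sql
-- Restore a damaged backup anyway, accepting that the resulting
-- database may contain corruption that must be handled afterwards.
RESTORE DATABASE MyDamagedDb
FROM DISK = N'C:\Backups\MyDamagedDb.bak'
WITH CONTINUE_AFTER_ERROR;

-- Then assess the extent of the damage:
DBCC CHECKDB (MyDamagedDb) WITH NO_INFOMSGS;
```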
There is a request to grant a particular user the privilege to view stored procedure source code for all databases. We searched the web and applied the following script:

USE master
GO
GRANT VIEW ANY DEFINITION TO User1

Is there any way to check that the VIEW ANY DEFINITION permission has been granted? We have tried sp_helprotect in one of those databases, but no VIEW DEFINITION action is shown. Your advice is sought.
Check out the sys.server_permissions catalog view.
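A sketch of such a check. Note that VIEW ANY DEFINITION is a server-level grant, which is why the per-database sp_helprotect output came back empty:

```sql
-- List who holds the server-level VIEW ANY DEFINITION permission.
SELECT pr.name AS grantee,
       pe.permission_name,
       pe.state_desc
FROM sys.server_permissions AS pe
JOIN sys.server_principals  AS pr
     ON pe.grantee_principal_id = pr.principal_id
WHERE pe.permission_name = 'VIEW ANY DEFINITION';
```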
login failures in a table in addition to the SQL log. Is there any way to do this? I've created an alert for error 18456, login failed for user '%ls'. If I configure the alert to call a job, how do I get the login details? That info gets logged for a successful login, but where is that info if the login fails?
Another option is to use a trace (or Profiler) to monitor for failed logins. You can import the trace file into a table. The IP and host name won't be directly available for failed logins. Host name isn't that reliable anyway, as it's controlled by the client. For the IP address
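A sketch of importing such a trace file into a table (the path and table name are hypothetical):

```sql
-- Load a trace file into a table so failed-login events can be queried.
SELECT *
INTO   dbo.FailedLogins
FROM   ::fn_trace_gettable(N'C:\Traces\failed_logins.trc', DEFAULT)
WHERE  EventClass = 20;   -- EventClass 20 = Audit Login Failed
```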
trace (using sp_trace_create). I went onto one of my servers to fire off a trace and noticed that another one has been running for a few days. Is there any way to display the traces that are running on a particular server? I would like to stop the old trace, but I can't figure out how. Any help would be appreciated. Thanks in advance. Tom
It figures - I was looking for this info all day yesterday. Right after posting this, I finally discovered it. The following
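A sketch of what the poster likely found (the trace id used below is just an example; get the real one from the first query):

```sql
-- List the traces currently defined on the server.
SELECT * FROM ::fn_trace_getinfo(DEFAULT);

-- Stop, then close and delete an unwanted trace by its id.
EXEC sp_trace_setstatus @traceid = 2, @status = 0;  -- stop
EXEC sp_trace_setstatus @traceid = 2, @status = 2;  -- close and delete
```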
Hi, everyone. I have a table which already has over 7 million records in SQL Server 2000. I dropped an index on this table, but when I recreate the same index it takes a long, long time because of the huge number of records. My question is whether there is a better way to create the index on that many records without affecting the performance of the DB too much. Or can I disable the log file like Oracle does? Or something else?
Why don't you follow up with the response posted to yesterday's version of the same question?
I really did try it that way, but it is still too slow. I think maybe there is another way to do it.
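One option worth trying, sketched here with a hypothetical table and column: the SORT_IN_TEMPDB option (available in SQL Server 2000) moves the index build's sort work into tempdb, which can speed things up if tempdb sits on separate, faster disks.

```sql
-- Build the index using tempdb for the intermediate sort.
CREATE INDEX IX_BigTable_OrderDate
ON dbo.BigTable (OrderDate)
WITH SORT_IN_TEMPDB;
```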
are on 2005.
Thank you for your response. Unfortunately, we are still on 2000. Are there any SQL alerts that would capture this? Any way to write to the event logs? I don't know when it will happen and can't trace.
Start a perfmon (Administrative Tools - Performance). Object: SQLServer:Databases; Counters: Data File(s) Size (KB), Log File(s) Size (KB); Instances: select the appropriate database(s) that you'd like to collect. A relatively low sampling interval (once every minute or so) should be enough to give you the data that you're looking for. Whenever any of these counters increases, you'll know that a file growth occurred and the amount by which the file was expanded. MS
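An alternative sketch for SQL Server 2000, if perfmon isn't convenient: record file sizes from a scheduled job so growths can be spotted after the fact. The table name is hypothetical.

```sql
-- History table for periodic file-size snapshots.
CREATE TABLE dbo.FileSizeHistory (
    logged_at DATETIME NOT NULL DEFAULT GETDATE(),
    file_name SYSNAME  NOT NULL,
    size_kb   INT      NOT NULL
);

-- Run this from a scheduled job; sysfiles stores size in 8 KB pages.
INSERT INTO dbo.FileSizeHistory (file_name, size_kb)
SELECT name, size * 8
FROM   sysfiles;
```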
Hi all. I have a database, and I need to make sure no auditor is able to track changes made to it. As far as I know, the LDF file keeps a record of all the transactions performed on it, which is exactly what I don't want. I've read that it is impossible to disable logging in MSSQL. Is this 100% true? Has anyone found a way to keep the log file clear? Att, RODOLFO
Hi. We cannot totally stop logging in SQL Server. But if you set the RECOVERY model for your database to SIMPLE, the transaction log will be cleared automatically. For the other recovery models, you need to perform transaction log backups. See the BACKUP LOG command in Books Online. Hari SQL Server MVP
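A sketch of both approaches (database and path names are hypothetical):

```sql
-- SIMPLE recovery: log space is reused automatically at checkpoints.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- Under FULL (or BULK_LOGGED) recovery, back up the log regularly
-- to truncate its inactive portion and keep it from filling up.
BACKUP LOG MyDb TO DISK = N'C:\Backups\MyDb_log.bak';
```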
growing. Right now, I manually take the file, relocate it to another folder, then when the next differential backup file is written, I delete the large differential file and continue to let the new backup set grow, repeating the process as necessary. Is there a better way to control this process
or at least or a DOS batch file. If you are using DTS, then you could opt for an ActiveX script task as well. Also, read up on the WITH INIT option of the BACKUP command and see if that is of any use to you. -- HTH, Vyas, MVP (SQL Server) http://vyaskn.tripod.com/ Is .NET important for a database professional? http://vyaskn.tripod.com/poll.htm
With my maintenance plans for backup jobs, I can control how long the backup files will
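A sketch of the WITH INIT suggestion (names are hypothetical): INIT overwrites the existing backup sets in the file instead of appending to them, which keeps the differential backup file from growing across runs.

```sql
-- Overwrite the previous contents of the backup file each run.
BACKUP DATABASE MyDb
TO DISK = N'C:\Backups\MyDb_diff.bak'
WITH DIFFERENTIAL, INIT;
```

The trade-off is that INIT discards the prior backup sets in that file, so keep them elsewhere if you need a history.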
the end. Unfortunately, this still returns the unique constraint violation even though I am doing the RETURN(0). There doesn't seem to be any way to keep the unique constraint error from going through to ADO and being reported back as an error. The procedure has now been changed to check for the unique problem before performing the insert, but that seems
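A sketch of that check-before-insert workaround (table, column, and parameter names are hypothetical). The underlying issue is that the constraint violation is raised to the client as soon as the INSERT fails, before any RETURN statement runs, so it can't be suppressed inside the procedure after the fact.

```sql
-- Avoid triggering the unique constraint in the first place.
IF NOT EXISTS (SELECT 1 FROM dbo.Customers WHERE Email = @Email)
    INSERT INTO dbo.Customers (Email, Name)
    VALUES (@Email, @Name);
ELSE
    RETURN 0;  -- duplicate: skip the insert quietly
```

Note that under concurrency two sessions can both pass the EXISTS check before either inserts, so the constraint can still fire; locking hints or retry logic may be needed for a watertight version.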
have an apparently deliberate delay between auto-grow events? Surely it can grow the log file and initialise it in multiple chunks together, or even not have to wait between chunks? If the insertion causes 10 auto-grow events, that's 10 seconds longer it takes.
a database transaction log? I have tried using ALTER DATABASE, MODIFY FILE to set a new size. This works, but only temporarily. The problem here is that my database is using the simple recovery model and has AUTO_SHRINK switched on. Therefore, I find it shrinks back to the size it was initially created with. Very annoying - I want it to return to a more useful size! Is there any way to configure auto-shrink to return the database to a specific size (such as you can with DBCC SHRINKFILE)? I thought about using sp_detach_db and
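AUTO_SHRINK doesn't take a target size, so the usual route is to disable it and manage the size explicitly. A sketch of the pieces mentioned above (database, logical file name, and sizes are hypothetical):

```sql
-- Stop auto-shrink from undoing the resize.
ALTER DATABASE MyDb SET AUTO_SHRINK OFF;

-- Set the log file to the size you actually want.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, SIZE = 512MB);

-- Or shrink to a specific target size (in MB) when needed.
DBCC SHRINKFILE (MyDb_log, 512);
```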