Showing posts with label management. Show all posts

Thursday, March 29, 2012

Drag and drop query onto Management Studio

Every time I drag and drop a query file (.sql) from Windows Explorer onto Management Studio, I am prompted to log in. How do I get drag and drop to use the existing connection?
guy,
Install SP2, I believe. (At least it works for me.)
RLF
|||That's the ticket. Thank you.

Drag & Drop Multiple Query Files Into Management Studio

In SQL Server 2000, you could have a Query Analyzer (QA) query window open to a particular database, and drag and drop a group of query files from Windows Explorer on it. QA would then open a query window for each of the files, and automatically default it to that database.

I am trying to get the same functionality in SQL Server 2005 Management Studio, but it nags me with the "Connect to Database Engine" dialog box for every file. Worse still, it will not default to the same database as the query window it was dropped on. And to add insult to injury, it makes you click the Options button to change to anything other than the default database. The behavior is the same in both the tabbed and MDI environments.

I have looked for settings that would affect this behavior, but didn't notice any. Does anyone know if this behavior is possible with 2005? If so, how do you implement it? Thanks.

Kevin

Did you find an answer to your problem? I have just started using SQL Server 2005 and I am having the same problem.

Thanks

Danny

|||

Even if I use "open file" to open saved queries I get asked each time to connect to the database engine. What am I doing wrong?

Thanks

Danny

|||Unfortunately I still have not found a way around the problem. I assume it's something we will have to live with.|||

One solution would be to create a SQL project with the connection info and the related query files as members of the project. This is similar to the VS project model.

You can have multiple projects with different connection settings in a solution.
You may also be able to leverage the new SQLCMD internal commands to embed the connect statement within your scripts.
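For example, something like the following sketch ("MyServer" and "MyDatabase" are hypothetical names, and the query window must have SQLCMD Mode enabled via the Query menu):

```sql
-- Connect using SQLCMD's :CONNECT command, then run against a chosen database.
-- :CONNECT uses Windows authentication by default; add -U/-P for a SQL login.
:CONNECT MyServer
USE MyDatabase;
GO
SELECT name FROM sys.tables;  -- any query now runs on MyServer, in MyDatabase
GO
```

With the :CONNECT line embedded at the top of each script, the file carries its own connection and the login prompt is avoided.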

Thanks,
Ramesh

|||There must be an easier solution to this. Shocking!|||

Please raise a suggestion at the product feedback centre

http://lab.msdn.microsoft.com/ProductFeedback

|||Added a suggestion - http://lab.msdn.microsoft.com/ProductFeedback/viewFeedback.aspx?feedbackId=FDBK48385


DR Suggestions Required

Hi All,

I need some suggestions to take to senior management for DR purposes:

Background:

WSS/MOSS2007 is being used as a Document Management solution.

17 servers geographically dispersed around the UK. Each server runs WSS 3, SQL Server 2005 and IIS, and is linked into a PiP cloud via a 2 Mbit MPLS line.

At each location we are looking at 20 core databases, each pre-sized to 10 GB. If I take one site as an example, the previous night's backup totalled 135 GB.

The company has taken a centralised view on backups, so SQL Server data and log files are replicated using Double-Take to a central location, whereby the files are taken onto tape daily (full backup of all files).

As a precaution, I take a full SQL Server backup daily and also tran logs every 4 hours locally, and keep them there for 2 days; however, if the site goes boom I lose those, so for this purpose please forget they exist.

As expected, when I restore the mdf and ldf files from tape and attach them into SQL Server, I will get errors from transactional inconsistencies, which I'm well aware of.

Other options I've considered are:

1) DB Mirroring. Not a bad option, but I still have to get the DB to the mirror server in the first place. Also, DB mirroring is not recommended for more than 10 mirrored databases.

2) Log Shipping. Same issue as above: I have to get the data here in the first place. And once log shipping is set up, if I have a failure I need to start the whole lot off again.

3) Transactional Replication. The issue is the initial replication getting the data from A to B; then if I need to use it in a DR situation, I will get errors saying a table is being used for replication. This can be worked around, but it's not a quick process...

4) 3rd-Party Backup Compression, e.g. LiteSpeed, Red Gate SQL Backup, etc. Good: tests have shown 42% compression for us. However, if I refer to the earlier example of 135 GB, this compresses to 81 GB. Throw in the theoretical max for a 2 Mbit link of 19 GB / 24 hours, and this would take 4 days to copy.
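For what it's worth, the arithmetic in option 4 can be checked with a quick back-of-envelope calculation (plain T-SQL, using the figures quoted above):

```sql
-- Rough transfer-time check using the numbers from option 4 above.
DECLARE @backup_gb float, @compressed_gb float, @gb_per_day float
SET @backup_gb = 135.0                        -- previous night's backup total
SET @compressed_gb = @backup_gb * (1 - 0.42)  -- 42% compression -> ~78 GB (quoted as 81 GB)
SET @gb_per_day = 19.0                        -- theoretical max for a 2 Mbit link per 24 hours
SELECT @compressed_gb AS compressed_gb,
       @compressed_gb / @gb_per_day AS days_to_copy  -- roughly 4 days, as stated
```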

Other thoughts I've come up with are:

A) Split the tables into different filegroups; not sure how easy this would be as the DBs and tables already exist.

B) Full/Diff/Tran backups. Still the issue of scheduling the full backup over the weekend and it taking 4 days to get here.

C) Local tape backups. The issue is relying on someone to change the tape daily; it's not centrally managed, and how do we restore in a DR situation?

Could someone give me some pointers please?

Thanks

Steve

SQL DBA.

I assume that the entire backup of all the DBs in one location is 135 GB. You can try log shipping, where you can specify the option to generate a full backup of the primary db and restore it on the secondary, which creates the db on the secondary automatically so that you need not copy the files manually. Out of those 20 DBs, if you feel any are extremely mission critical (very specific), you can configure mirroring for those DBs alone, with automatic failover in high-availability mode; in log shipping, failover is manual. For the other DBs, take a direct tape backup.

You need to sit down and split these 20 DBs by which are very critical and which merely critical, and also consider their sizes. If one grows enormously, you may need a different strategy: log shipping or mirroring rather than tape/full/differential backups.

Let us get a few more inputs from our friends in this community.

Rgds

Deepak

|||

So, my first thought is that Double-Take isn't magic - it also has to get the data from site A to site B somehow. How is it doing that?

If you're going to have a true DR site - one from which you could recover if your primary site became a smoking hole in the ground - you have no option but to somehow get the data replicated in a remote site. If you enable some sort of ongoing replication (mirroring, log shipping, transactional replication), then this is a one-time hit, and not an ongoing burden.

Sometimes if the links are not as fast as you'd like, the most efficient way to transfer data is "FFTP" - the Ford File Transfer Protocol. (put the tapes in the trunk of your Ford and drive them over...)

Certainly, compressing the backups will help with this, as the volume of data is reduced.

By the way, SQL Server 2008 will include backup compression as part of the native product.

|||

Thanks for your thoughts guys.

We are using Double-Take at the moment to get the data and log files here. We do have a one-time hit when we kick the replication set off for the first time on a new site.

We are using DT as our DR/user recovery solution too. We have configured the Recycle Bins in WSS/MOSS, so we can cover that angle; its primary reason for replication is DR.

Bear in mind each new site consists of 20 x 10 GB data files (pre-sized) and 20 x 5 GB log files (pre-sized), giving a total of 250 GB which needs to replicate. There is only 40 MB or so in each db at this point, but Double-Take can't see this and so replicates the white space. If we were to do a SQL backup, this would be around 400 MB per site, since the space is not yet fully used.

We anticipate substantial volume growth in the next 12 months. We project 3.6 TB for all sites once they are migrated onto the document management system, then anything up to 10 TB+ spread over 17 servers. Can you see a DR nightmare coming?

We can and have tried shrinking the DBs to 5 GB apiece, but then we get into disk fragmentation issues later on, which is the reason behind the pre-sizing.

I've thought about database mirroring from the off: we have 20 x 40 MB of content, so copy it over and then start mirroring; trouble is, all sites now have content. Also, I was under the impression that SQL Server could not support more than 10 mirroring sessions from any one source server?

The value of the data is mission critical. Essentially they are documents, tenders, spreadsheets, drawings, emails, etc. It's a document management system in essence with a custom UI.

One question I've thought of is this:

If we are using log shipping, say we have a network blip and we lose the site for a couple of hours. Will those transactions queue and apply once the link is back, or will the process fall over?

My experience of log shipping in SQL 2000 is that it would fall over and require a complete resync. Not a small task when you have 135 GB of content...

Thanks,

Steve

|||

I haven't seen that happen, and we've lost connectivity to DR sites before. If you have network problems and lose connectivity for a couple of hours, all that should do is stop the transaction logs being copied over to the DR site. Once it's back, you can make sure all the logs get copied over from where the copies stopped, restore them, and catch up with where you were supposed to be. It shouldn't affect the database at the DR site, as it would still be in a restoring state and able to apply more log backups.

-Sue

|||

Would it be possible to consolidate into fewer larger databases?

i.e. 20 10GB databases -> 10 20GB databases, etc.

This might make mirroring more feasible.

|||

Consolidating the DBs down is certainly an option; I had previously discounted it because I wanted a higher number of smaller DBs rather than a smaller number of higher-capacity DBs, purely for recovery purposes.

In your suggestion we would have 10 content databases, but we would also need to mirror the SharePoint config database, as it records what is in which content database.

If I were to mirror 11 DBs, I don't know what effect it would have. Although the boxes can run x64 code, that's a management nightmare in that each server would have to be rebuilt. The inconvenience it would cause the business is something they would most likely try to avoid.


Sunday, March 25, 2012

Download: SQL Server Management Studio Express

I have an instance of SQL Server 2005 running on a server. I know that Management Studio is available to me on the server through Programs.

What do I need on my client computer, used for development, to access the instance on the server? Is it as simple as a shortcut to the server's Management Studio exe? Do I access it through Internet Explorer? Or what do I have to install on the client to be able to manage the server DBs?

Apparently this must be very easy because I've looked all over and can't find the answer. Thank you in advance.

Use the SQL Server installation CD, and choose the "Install Client Tools" option. At a minimum you'll need client connectivity; if you want the query editor and other development tools, you'll need Management Studio.

I don't think it's as simple as creating a shortcut to the Management Studio EXE file on your server; you're best off installing it properly.|||

I installed the client components as per the description (client connectivity), but there is no Management Studio, only 2 tools: 'Surface Area Configuration' and '...Configuration Manager' - it doesn't really help!

I'd like to have Management Studio on the client ... how can I get it?

|||Have a look at the following thread on MSDN's Channel 9. Others were having a similar issue.
http://channel9.msdn.com/ShowPost.aspx?PostID=142941

It appears that you may need to launch it separately from the Tools directory on your installation media.|||The reason: IE6 SP1 was absent! After installing it, I could choose Management Studio from DVD/TOOLS. Thanks!|||Are the management tools available for download?|||For the Express edition, you can download it here:
http://msdn.microsoft.com/vstudio/express/sql/download/

If you're using the Standard/Enterprise/Developer edition, it should be on your installation media (DVD > TOOLS directory)

Sunday, February 26, 2012

Don't have any Server to choose from the Server name list

Hi
I have just installed SQL Server 2005 Enterprise Edition on Microsoft Windows 2000. When I start SQL Server Management Studio and have to choose a server from the Server name list, the list is empty. I have no clue what to do.
Please help me!!!
Explain carefully
Fia

Type the name of the server in the Server Name field. If it's a named instance, you would need to type ServerName\InstanceName.
-Sue

Don't find Management Studio after installing SQL 2008 CTP July

My OS is WinXP

I don't find Management Studio after installing the SQL 2008 July CTP.

Why?

Thanks!

bill

I know it sounds like a super obvious question, but did you select to install Management Studio? Did you hit any installation errors?

Take a look at C:\Program Files\microsoft sql server\100\Setup Bootstrap\LOG\Summary.txt to see if there were any installation errors.

Cheers,

Dan

|||

Bill,

Since we haven't heard back from you in ~10 days I'm going to close this as answered. If you're still encountering problems with Management Studio feel free to post a new thread.

Cheers,

Dan

|||

I am facing the same problem. As suggested in the comment, I checked the log file, and its contents are below:

Microsoft SQL Server 2008 10.0.1049.14
==============================
OS Version : Microsoft Windows XP Professional Service Pack 2 (Build 2600)
Time : Sun Sep 23 13:07:25 2007

So please help me resolve this issue.

Don't Display dbo. in Object Explorer

Is there a way to stop SQL Server Management Studio (2005) from showing "dbo." on everything in the object explorer.

I'd love to turn that off on a database I'm working on. For example, to find tblMember you have to type d-b-o-.-t-b-l-m before you can actually start jumping to the table you're interested in, whereas in Enterprise Manager (2000) you'd only have to type t-b-l-m, if you get my meaning.

Hi there,

There is no such option to hide the owner of the table. Actually, this is good, because ownership is a strong security feature that SQL Server provides. You will want to keep the owner in mind, especially if you are going to host your application in a shared environment where all tables will be owned by your SQL login (which will definitely not be sa).
Sorry for bringing the bad news,
Cheers,

Andreas Botsikas
