
Thursday, March 29, 2012

Drawing X & Y axis lines on a scatter chart that cross at 0

I have a problem with a scatter graph.

The x values range from -80 to +40.
The y values range from -6000 to +9000.
Both need the axes to cross at 0.

No gridlines are required on the graph apart from the main axes, which is where I'm struggling. The axis lines aren't drawn for the main axis points. However, if I try to use gridlines I end up with lines in other parts of the graph too, which is undesired.

How can I get the axes to have their lines drawn?

Thanks,
David (HN)

Did you try setting the axis major interval to a very large value (e.g. 10000)?

-- Robert

Draw Axis on Barchart

Hi,

I have a little problem with my bar chart.
The chart shows negative and positive values, and I want a simple vertical black line at 0.
But I haven't found a solution for this problem yet that satisfies me.

What I tried:
- Drawing a normal line and placing it above the graph -> doesn't work in all renderers
(overlapping report items are not supported ...)

- The "Cross At" parameter:
I have labels on the axis, and if I cross at 0 the labels also move into the middle and overlap
with the bars/point labels.

- Gridlines (so far the best solution):
a chart with a fixed range (for example -100 to +100) and a gridline interval of 100
prints 3 gridlines: the 1st at -100, the 2nd at 0 (what I want) and a 3rd at +100 - that would be OK for me.
BUT I need a variable chart range!
So if the chart range goes from -100 to +200, I have a gridline at +100 crossing all the bars!

here are some pictures of my problem

What I want:
http://www.hmayer.net/show_image.php?id=4387&scaled&scalesize=800&nocount=y

What I got:
http://www.hmayer.net/show_image.php?id=4386&scaled&scalesize=800&nocount=y

any other ideas?

thanks,
Gerhard

You can accomplish this with the following expressions, if you are willing to have the min and max for the axis be the same absolute value. The expression calculates the largest value, in absolute terms, and sets the Min (with negation), Max and Major Interval properties on the Y-axis to this expression.

Min:

=-Math.Max(Max(Fields!FieldName.Value), Math.Abs(Min(Fields!FieldName.Value)))

Max:
=Math.Max(Max(Fields!FieldName.Value), Math.Abs(Min(Fields!FieldName.Value)))

Major Interval:
=Math.Max(Max(Fields!FieldName.Value), Math.Abs(Min(Fields!FieldName.Value)))

Note: Expressions for these properties are not supported in RS 2000.

Another approach would be to set the Cross At to 0 and not use tick marks or axis labels on the axis. Instead of using the labels, you could use a table placed beside the chart that contains these values. It may be tricky to get the labels to line up correctly--make sure to sort the category group and the table the same way.

Ian

Sunday, March 11, 2012

Double Inner Join

I have a table where two fields are linked to two other tables.

I need to retrieve values from the 3 tables.
Can I use the inner join twice? Like:

select f_tb2,f_tb3,* from tb1
inner join tb2 on tb2.f_id=tb1.an_id and
inner join tb3 on tb3.f_id=tb1.another_id
etc...

Or something like that ?

Or any other method ?

Thanks for help.

Originally posted by Plarde:
> Can I use the inner join twice?

Yes, you can join many columns in a table to any number of tables. Frequently there will be more than one column in a table which needs to correspond to other tables.

It's when you start talking about outer joining a table to more than one table that you may run into limitations, but in this case you will definitely be able to do that :)

Yes, you can do it, with one correction and one suggestion:

select f_tb2, f_tb3, tb1.*
from tb1
inner join tb2 on tb2.f_id=tb1.an_id
inner join tb3 on tb3.f_id=tb1.another_id

correction: remove the word "and" from between the joins

suggestion: qualify the asterisk, otherwise it returns all columns from all tables

Here is a rather large query we use regularly with multiple joins:

select item.item,
Item.description,
LocInv.Item_Desc as LIDesc,
WOH.Item_Desc as WOHDesc,
WOD.Item_desc as WODDesc,
WI.Item_desc as WIDesc,
BOMD.Item_desc as BOMDDesc,
BOMH.Item_desc as BOMHDesc
from Item with (nolock)
inner join Location_inventory LocInv with (nolock)
on Item.Item = LocInv.Item
inner join Work_Order_Header WOH with (nolock)
on Item.Item = WOH.Item
inner join Work_Order_Detail WOD with (nolock)
on Item.Item = WOD.Item
inner join Work_Instruction WI with (nolock)
on Item.Item = WI.Item
inner join Bill_Of_Materials_Detail BOMD with (nolock)
on Item.Item = BOMD.Item
inner join Bill_Of_Materials_Header BOMH with (nolock)
on Item.Item = BOMH.Item
where (Item.Description <> LocInv.Item_Desc and Item.Item = LocInv.Item)
or (Item.Description <> WOH.Item_Desc and Item.Item = WOH.Item)
or (Item.Description <> WOD.Item_Desc and Item.Item = WOD.Item)
or (Item.Description <> WI.Item_Desc and Item.Item = WI.Item)
or (Item.Description <> BOMD.Item_Desc and Item.Item = BOMD.Item)
or (Item.Description <> BOMH.Item_Desc and Item.Item = BOMH.Item)

Wednesday, March 7, 2012

Don't use dateadd

I've got a reasonably efficient query that gives me a count of the top 20 daily values in my database. Now I'd like to figure out the daily total of top 20 values. Then analyze this information to work out the average, min, max and standard deviation of the daily total of top 20 values.

My best effort is horribly slow - does anyone have a better idea how to do this?

Thanks!

The schema of the database it accesses:

create view eventView (timeStr, msec, host, process, dbName, point,
description, rtu, groupName, message, type,
sevInt, time)
as
select dateadd(second,time+60*offset,'01/01/70'), msec, host, process,
dbName, ptName, description, rtuName, groupName,
message, type, sevInt, time
from summary

CREATE TABLE [summary] (
[msrepl_tran_version] uniqueidentifier ROWGUIDCOL NOT NULL CONSTRAINT [DF_Summary_GUID] DEFAULT (newid()),
[time] [GMTtime] NOT NULL ,
[msec] [int] NULL ,
[offset] [GMToffset] NOT NULL ,
[type] [nameType] NULL ,
[host] [nameType] NULL ,
[process] [nameType] NULL ,
[dbName] [nameType] NULL ,
[ptName] [nameType] NULL ,
[description] [descType] NULL ,
[rtuName] [nameType] NULL ,
[groupName] [nameType] NULL ,
[message] [msgType] NOT NULL ,
[fgInt] [tinyint] NULL ,
[sevInt] [tinyint] NULL ,
[key1] [int] NULL ,
[key2] [int] NULL ,
[spooler] [tinyint] NULL
) ON [PRIMARY]
GO

My Top 20 query:

SELECT TOP 20 count (*) as "Number of Alarms", [point], [description]

FROM [event].[dbo].[eventView]
WHERE ([timestr] < left(getdate(),11) and [timestr] >= left(getdate() - 1,11))
GROUP BY point, description HAVING count(*) > 1
ORDER BY "Number of Alarms" desc

And the messy, slow meta query:

declare @myDay datetime
declare @begDay datetime

declare @myTable
table(Alarms int, Point varchar(250), Description varchar(250), Before datetime, After datetime)

declare @myDaily
table(Date datetime, Alarms int)

select @myDay = left(getdate(),11)

select @begDay = left(convert(datetime, '12/01/2006'), 11)

while @begDay <= @myDay
begin

insert into @myTable (Alarms, Point, Description, After)
SELECT TOP 20 count(*) as "Number of Alarms", [point], [description], @begDay
FROM [event].[dbo].[eventView]
where ([timestr] < @begDay and [timestr] >= dateadd(day,-1,@begDay))
group by point, description
having count(*) > 1
order by "Number of Alarms" desc

select @begDay = dateadd(day,1,@begDay)

end

--

insert into @myDaily
select After as "Date", sum(Alarms) as "Alarms"
from @myTable
group by After
--

select count(Alarms) as "Count", avg(Alarms) as "Average", max(Alarms) as "Maximum", min(Alarms) as "Minimum", stdev(Alarms) as "Standard Deviation"
from @myDaily

As far as I know, you can improve performance if you make some changes in these places:
1. Avoid using functions in your WHERE clause.
2. Create an index on the summary table.
3. I am not sure why you have to use dateadd(second, time + 60*offset, '01/01/70') in eventView. You could create another column which stores the time in your local timezone.


|||

I'm still learning SQL, so I don't understand all of your suggestions:

1. Avoid using function in where clause

Are you referring to the dateadd? How else can I limit the data to daily information?

2. create index in summary table.

Sorry, can't do that. I don't have control over the summary table - it's provided to my company by the owner of the software.

3. why you have to use dateadd(second,time+60*offset,'01/01/70'), in eventView

Because the software mfg stores data in the summary table in the format of "UTC seconds ". Once again, I cannot change the summary table.

If I were you, I wouldn't use dateadd(...) in eventView. That is, we would still use the UTC timestamp in eventView.

You can create @begDay_UTC and @PreviousDay_UTC and use them in this part of the code in your where clause:
([timestr] < @begDay_UTC and [timestr] >= @PreviousDay_UTC)
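To make that concrete, the UTC-second boundaries could be computed once per loop iteration instead of calling dateadd() per row. This is a sketch only: it assumes the summary table stores [time] as seconds since '01/01/70' as described above, reuses the @begDay variable from the earlier loop, and ignores the 60*offset adjustment applied by the view.

```sql
-- Sketch: precompute day boundaries as UTC seconds so the WHERE clause
-- can compare the raw [time] column directly, with no per-row dateadd().
declare @begDay_UTC int
declare @PreviousDay_UTC int
select @begDay_UTC      = datediff(second, '01/01/70', @begDay)
select @PreviousDay_UTC = datediff(second, '01/01/70', dateadd(day, -1, @begDay))
-- then, against the base table:
-- where [time] < @begDay_UTC and [time] >= @PreviousDay_UTC
```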

If you can find out which index the Summary table uses, that will help us figure out how to improve performance.

Also, the estimated execution plan in SQL Server Management Studio will help us find out which part of the code is the most expensive.

You can replace that while loop with a single query that will greatly improve the performance, here it is (I think, haven't tested but should be very close)

insert into @myTable
SELECT TOP 20 count(*) as "Number of Alarms", [point], [description], [timestr], dateadd(day, 1, [timestr])
FROM [event].[dbo].[eventView]
where [message] not like '%NORMAL state%' and
[message] not like '%restored - normal%' and
[message] not like '%communication%restored%' and
[message] not like '%PLM - NORMAL%' and
[type] = 'alarm' and
[timestr] between @begDay and @myDay
group by [timestr], dateadd(day, 1, [timestr]), point, description, dbName
having count(*) > 1
order by "Number of Alarms" desc

|||

Agree this makes more sense and is easier to read. I've tested this and it does not make much difference in the performance.

I'm going to use your suggestion - much cleaner and easier to understand! Thanks...

|||

I think you are trying to grab the daily information using the GROUP BY.

This doesn't work for me because I'm trying to get a set of n days (n = 265 on my system) of daily top-20 values; this query only returns 20 rows. I want 265 x 20 values.

|||

Which version of SQL Server are you using? You can simplify the WHILE loop in SQL Server 2005 using the APPLY operator. In SQL Server 2000, there is no easy way to write a single query - you have to do some sort of procedural loop which might be the fastest way. See below for an example in SQL Server 2005:

-- Top 2 order details based on quantity for each order:

select *
from Orders as o
cross apply (
select top 2 *
from "Order Details" as od
where od.OrderID = o.OrderID
order by od.Quantity desc
) as o2
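Adapting that pattern to the per-day top-20 problem might look like the following. This is an untested sketch for SQL Server 2005+ only: it assumes the eventView definition given earlier, and derives the list of days from the data itself rather than from a calendar table.

```sql
-- Sketch: one row per distinct day, then CROSS APPLY pulls that day's
-- top 20 alarm counts, giving n days x 20 rows in a single query.
select d.eventDay, t.Alarms, t.point, t.description
from (select distinct convert(datetime, left(timeStr, 11)) as eventDay
      from [event].[dbo].[eventView]) as d
cross apply (
    select top 20 count(*) as Alarms, ev.point, ev.description
    from [event].[dbo].[eventView] as ev
    where ev.timeStr >= d.eventDay
      and ev.timeStr < dateadd(day, 1, d.eventDay)
    group by ev.point, ev.description
    having count(*) > 1
    order by count(*) desc
) as t
```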

|||

Our system runs MS SQL Server 2000. And since it's the back end providing data archiving for our turnkey system, we will be using it for years to come.

When you say "procedural loop", that makes me think the WHILE loop is the only way to do this job. Too bad 8-(

dont insert if record exists

/*if key values exist don't insert new record*/
SELECT

/*if exists don't insert*/
CASE
WHEN ISNULL(gradeId, -1) = -1 THEN
INSERT INTO tblScores
(gtStudentId, assignmentId, score)
VALUES (@nStudent, @nAssignment, 0)
END

FROM tblScores
WHERE gtStudentId = @nStudent AND assignmentId = @nAssignment

tblScores has two fields comprising its primary key (gtStudentId, assignmentId), and gradeId is a required field in this table.

I'm getting syntax errors when I click Check Syntax (near the keywords INSERT, FROM and END).

One other note: this CASE ... END is nested inside a BEGIN ... END block. Is this the problem? Is the 'END' of the 'CASE' closing the 'END' of the 'BEGIN'?

thanks

I think the problem is that the INSERT statement is not a result_expression (see BOL). Besides, executing an INSERT statement from a select-case? What are you trying to accomplish?

It's rather complicated, but I'll try:

This code is inside a trigger on an enrollment table. If a student joins a class late (after assignments have been assigned), the trigger creates new records in the scores table, giving the new student a 0 for each assignment already assigned to the class. If a student gets unenrolled from the class, the scores are maintained in case the student gets reenrolled. If the student gets reenrolled, I want to do a select on the scores table and bypass any scores the student may have had prior to being unenrolled.
INSERT-statement is not a result_expression
How does one do a conditional INSERT?

IF NOT EXISTS(
SELECT TOP 1 gradeId
FROM tblScores
WHERE
gtStudentId = @nStudent
AND assignmentId = @nAssignment)
BEGIN

INSERT INTO tblScores (
gtStudentId,
assignmentId,
score)

SELECT
@nStudent,
@nAssignment,
0
END

I forget which site...but we just did this...Gotta find the code I wrote...

Why would you want to double every access to the database?

Just handle the dup key error...in a calling sproc or the application...

EDIT: It was originally about constraints...same thing

USE Northwind
GO

CREATE TABLE myTable99(Col1 char(1), CHECK(Col1 IN ('Y','N')))
GO

CREATE PROC mySproc99 @x char(1) AS INSERT INTO myTable99 SELECT @x RETURN
GO

CREATE PROC mySproc00
AS
DECLARE @rc int
EXEC @rc = mySproc99 'Y'
SELECT @rc, @@ERROR
EXEC @rc = mySproc99 'B'
SELECT @rc, @@ERROR
GO

EXEC mySproc00
GO

DROP PROC mySproc00
DROP PROC mySproc99
DROP TABLE myTable99
GO

I did not understand very clearly what you are trying to do... is it inserting into your table only those records that _do not already exist_ in that table?

If so, try this:

Insert into tblScores (gtStudentId, assignmentId, score)

Select @nStudent, @nAssignment, 0

Where (not exists (Select * from tblScores
Where gtStudentId = @nStudent
AND assignmentId = @nAssignment))

My point: just attempt the insert... and trap the error if it fails.

Don't make 2 data access attempts to the database
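A sketch of that single-round-trip approach follows. It assumes the composite primary key on (gtStudentId, assignmentId) rejects duplicates; 2627 is SQL Server's error number for a primary key violation.

```sql
-- Sketch: attempt the insert once; a duplicate key raises error 2627,
-- which the caller traps instead of pre-checking existence with a SELECT.
INSERT INTO tblScores (gtStudentId, assignmentId, score)
VALUES (@nStudent, @nAssignment, 0)

IF @@ERROR = 2627
    PRINT 'Score row already exists - insert skipped'
```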

Friday, February 17, 2012

doing a sum on a counted field

I have a dataset with a column that has 2 values: YES or NO.
I have to use a textbox inside a filtered (for YES) list to get
a count of YES, and another list to do the same for NO,
using Count(Fields!answer.Value). Now I want to perform a calculation
on the counts, and for the life of me I can't do it. Since the textboxes
are not contained in the same grouping, I cannot reference them using
ReportItems!txt.Value. Is there something I can change in my SQL
statement to break out the counts as their own columns? Please help.

In RS you could do a conditional sum using IIF:
=Sum( iif(Fields!answer.Value = "YES", 1, 0))
=Sum( iif(Fields!answer.Value = "NO", 1, 0))
-- Robert
This posting is provided "AS IS" with no warranties, and confers no rights.
"the finger" <m_kosurko@.hotmail.com> wrote in message
news:1115066710.809917.171580@.z14g2000cwz.googlegroups.com...
>i have a dataset with a column that has 2 values. YES or NO.
> i have to use a textbox inside a filtered(for yes) list that are to get
> a count of yes. I have to use another list to do the same for NO.
> using--Count(Fields!answer.Value). Now i want to perform a calculation
> on the counts and for the life of me i cant do it. Since the txtboxes
> are not contained in the same grouping i cannot reference them using
> ReportItems!txt.value. Is there something i can change in my sql
> statement to break out the counts as their own columns' please help
>

Doing a partial replace of a field

How could I do a replace of " " to " " in a string where the string is a field and the values are variable? I would like to use Enterprise Manager or a stored proc.
IE. Node.Nodename is equal to "have a nice day"; I want to replace the " " with " " after nice.

> How could I do a replace of " " to " " in a string where the string is a
> field and the values are variable? I would like to use Enterprise Manager
> or a stored proc.
> IE. Node.Nodename is equal to "have a nice day"; I want to replace the " " with " " after nice.
CREATE FUNCTION dbo.RegexReplace (
@value VARCHAR(4000),
@replacement VARCHAR(4000),
@pattern VARCHAR(4000),
@ignoreCase BIT
) RETURNS NVARCHAR(4000) AS
BEGIN
DECLARE @hResult INT
DECLARE @pRegex INT
EXECUTE @hResult = sp_OACreate 'VBScript.RegExp', @pRegex OUTPUT
EXECUTE @hResult = sp_OASetProperty @pRegex, 'Pattern', @pattern
EXECUTE @hResult = sp_OASetProperty @pRegex, 'Global', 1
EXECUTE @hResult = sp_OASetProperty @pRegex, 'IgnoreCase', @ignoreCase
EXECUTE @hResult = sp_OAMethod @pRegex, 'Replace', @value OUTPUT, @value, @replacement
EXECUTE @hResult = sp_OADestroy @pRegex
RETURN @value
END

UPDATE YourTable
SET YourColumn = dbo.RegexReplace(YourColumn, ' ', '\s+', 1)
...My two ... But I'm kind of masochistic that way.
Peace & happy computing,
Mike Labosh, MCSD
"When you kill a man, you're a murderer.
Kill many, and you're a conqueror.
Kill them all and you're a god." -- Dave Mustane

doing a distinct insert?

hi there,

I was just wondering if it's possible to do an INSERT INTO, but with a DISTINCT on three columns at once?

I want to insert values from a SELECT DISTINCT column_1, column_2, column_3 FROM table.

thanks

SJB

Is this what you want to do?

INSERT newTab (col1, col2, col3)
SELECT DISTINCT col1, col2, col3 FROM otherTab

This works just fine.

/Kenneth

|||

yes

If I understand your idea, this is the scenario:

I have 2 similar tables :

CREATE TABLE [dbo].[person](
[last] [nvarchar](50) NULL,
[first] [nvarchar](50) NULL,
[age] [nvarchar](50) NULL
) ON [PRIMARY]

CREATE TABLE [dbo].[person1](
[last] [nvarchar](50) NULL,
[first] [nvarchar](50) NULL,
[age] [nvarchar](50) NULL
) ON [PRIMARY]

person has 3 rows with 2 duplicates.

insert into person1
select distinct last, first, age
from person

inserts only 2 rows.

Tuesday, February 14, 2012

Does Tlog actually contain data ?

Say I update a column affecting a few thousand rows but do not issue a
commit .. I want to know if all these new values for the column, are they in
the Tlog or they actually get checkpointed to the data files as well ?
If they do not go to the data files, would it be true that only committed
data makes it to the data files while uncommitted data gets written to the
Transaction Log files ?
Thank you..

Hassan,
Yes, they are. In SQL Server 2000 the virtual tables "inserted" and "deleted" (within triggers) read the data from the log file.
"Hassan" <Hassan@.hotmail.com> wrote in message
news:uYcMghyIHHA.1252@.TK2MSFTNGP02.phx.gbl...
> Say I update a column affecting a few thousand rows but do not issue a
> commit .. I want to know if all these new values for the column, are they
> in the Tlog or they actually get checkpointed to the data files as well ?
> If they do not go to the data files, would it be true that only committed
> data makes it to the data files while uncommitted data gets written to the
> Transaction Log files ?
> Thank you..
Yes.
Only committed data will be written to the data files.
Regards
Amish Shah
http://shahamishm.tripod.com
Uri Dimant wrote:
> Hassan
> Yes , they are. In sql server 2000 a virtual tables "inserted" and
> "deleted" (within a triggers) are reading the data from LOG file
>
> "Hassan" <Hassan@.hotmail.com> wrote in message
> news:uYcMghyIHHA.1252@.TK2MSFTNGP02.phx.gbl...
> > Say I update a column affecting a few thousand rows but do not issue a
> > commit .. I want to know if all these new values for the column, are they
> > in the Tlog or they actually get checkpointed to the data files as well ?
> >
> > If they do not go to the data files, would it be true that only committed
> > data makes it to the data files while uncommitted data gets written to the
> > Transaction Log files ?
> >
> > Thank you..
> Only committed data will be written to the data files.
That is not correct, I'm afraid. SQL Server can write pages that are dirty and not committed to the
database data files. This can happen by both the lazywriter and the checkpoint process. SQL Server
makes sure that all log records (which can be cached) up to that point are written first, however.
This is called the Write Ahead protocol (WAL). See
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlIObasics.mspx for details.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
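The write-ahead behavior described above can be illustrated with a small experiment. This is a sketch only, assuming a scratch copy of the summary table that you are free to modify.

```sql
-- Sketch: CHECKPOINT may flush dirty, uncommitted pages to the data file,
-- but WAL guarantees the matching log records are hardened first, so the
-- rollback below can always undo the change.
begin transaction
update summary set msec = msec + 1 where msec is not null
checkpoint   -- may write the uncommitted pages to the .mdf
rollback     -- undone using the log records written ahead of the pages
```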
"amish" <shahamishm@.gmail.com> wrote in message
news:1166514127.349431.258400@.48g2000cwx.googlegroups.com...
> Yes
> Only committed data will be written to the data files.
> Regards
> Amish Shah
> http://shahamishm.tripod.com
>
> Uri Dimant wrote:
>> Hassan
>> Yes , they are. In sql server 2000 a virtual tables "inserted" and
>> "deleted" (within a triggers) are reading the data from LOG file
>>
>> "Hassan" <Hassan@.hotmail.com> wrote in message
>> news:uYcMghyIHHA.1252@.TK2MSFTNGP02.phx.gbl...
>> > Say I update a column affecting a few thousand rows but do not issue a
>> > commit .. I want to know if all these new values for the column, are they
>> > in the Tlog or they actually get checkpointed to the data files as well ?
>> >
>> > If they do not go to the data files, would it be true that only committed
>> > data makes it to the data files while uncommitted data gets written to the
>> > Transaction Log files ?
>> >
>> > Thank you..
>> >
Ok got it,
When checkpoint / lazywriter runs it writes dirty pages to disk and
lock that data to prevent committed view of it. User can view that data
only after data is committed. Nice to know...
Thanks Tibor.
Regards
Amish shah
Data is always viewed and modified in cache. Checkpoint has nothing to do with locking. As soon as
you do, say, a modification, the rows are locked, and the lock is held until end of transaction.
Since other users also read the data from cache (never directly from disk), they will be blocked,
regardless of whether the page is dirty and regardless of the checkpoint process.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"amish" <shahamishm@.gmail.com> wrote in message
news:1166538947.256500.58740@.48g2000cwx.googlegroups.com...
> Ok got it,
> When checkpoint / lazywriter runs it writes dirty pages to disk and
> lock that data to prevent committed view of it. User can view that data
> only after data is committed. Nice to know...
> Thanks Tibor.
> Regards
> Amish shah
>
> Tibor Karaszi wrote:
>> > Only committed data will be written to the data files.
>> That is not correct, I'm afraid. SQL Server can write pages that are dirty and not committed to
>> the
>> database data files. This can happen by both the lazywriter and the checkpoint process. SQL
>> Server
>> makes sure that all log records (which can be cached) up to that point are written first,
>> however.
>> This is called the Write Ahead protocol (WAL). See
>> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlIObasics.mspx for details.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://www.solidqualitylearning.com/
>>
>> "amish" <shahamishm@.gmail.com> wrote in message
>> news:1166514127.349431.258400@.48g2000cwx.googlegroups.com...
>> > Yes
>> > Only committed data will be written to the data files.
>> >
>> > Regards
>> > Amish Shah
>> > http://shahamishm.tripod.com
>> >
>> >
>> > Uri Dimant wrote:
>> >> Hassan
>> >> Yes , they are. In sql server 2000 a virtual tables "inserted" and
>> >> "deleted" (within a triggers) are reading the data from LOG file
>> >>
>> >>
>> >> "Hassan" <Hassan@.hotmail.com> wrote in message
>> >> news:uYcMghyIHHA.1252@.TK2MSFTNGP02.phx.gbl...
>> >> > Say I update a column affecting a few thousand rows but do not issue a
>> >> > commit .. I want to know if all these new values for the column, are they
>> >> > in the Tlog or they actually get checkpointed to the data files as well ?
>> >> >
>> >> > If they do not go to the data files, would it be true that only committed
>> >> > data makes it to the data files while uncommitted data gets written to the
>> >> > Transaction Log files ?
>> >> >
>> >> > Thank you..
>> >> >
>> >
Tibor,
So the dirty pages that are not committed can be written to both the data and log files. Does that mean that if they're written to the data files, they would get cleared from the log files even though they're still not committed?
Thanks
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:%2350BLy3IHHA.4376@.TK2MSFTNGP03.phx.gbl...
> Data is always viewed and modified in cache. Checkpoint has nothing to do
> with locking. As soon as you do, say, a modification, the rows are locked,
> and the lock is held until end of transaction. Since other users also read
> the data from cache (never directly from disk), they will be blocked,
> regardless of whether the page is dirty and regardless of the checkpoint
> process.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "amish" <shahamishm@.gmail.com> wrote in message
> news:1166538947.256500.58740@.48g2000cwx.googlegroups.com...
>> Ok got it,
>> When checkpoint / lazywriter runs it writes dirty pages to disk and
>> lock that data to prevent committed view of it. User can view that data
>> only after data is committed. Nice to know...
>> Thanks Tibor.
>> Regards
>> Amish shah
>>
>> Tibor Karaszi wrote:
>> > Only committed data will be written to the data files.
>> That is not correct, I'm afraid. SQL Server can write pages that are
>> dirty and not committed to the
>> database data files. This can happen by both the lazywriter and the
>> checkpoint process. SQL Server
>> makes sure that all log records (which can be cached) up to that point
>> are written first, however.
>> This is called the Write Ahead protocol (WAL). See
>> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlIObasics.mspx
>> for details.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://www.solidqualitylearning.com/
>>
>> "amish" <shahamishm@.gmail.com> wrote in message
>> news:1166514127.349431.258400@.48g2000cwx.googlegroups.com...
>> > Yes
>> > Only committed data will be written to the data files.
>> >
>> > Regards
>> > Amish Shah
>> > http://shahamishm.tripod.com
>> >
>> >
>> > Uri Dimant wrote:
>> >> Hassan
>> >> Yes , they are. In sql server 2000 a virtual tables "inserted" and
>> >> "deleted" (within a triggers) are reading the data from LOG file
>> >>
>> >>
>> >> "Hassan" <Hassan@.hotmail.com> wrote in message
>> >> news:uYcMghyIHHA.1252@.TK2MSFTNGP02.phx.gbl...
>> >> > Say I update a column affecting a few thousand rows but do not
>> >> > issue a
>> >> > commit .. I want to know if all these new values for the column,
>> >> > are they
>> >> > in the Tlog or they actually get checkpointed to the data files as
>> >> > well ?
>> >> >
>> >> > If they do not go to the data files, would it be true that only
>> >> > committed
>> >> > data makes it to the data files while uncommitted data gets written
>> >> > to the
>> >> > Transaction Log files ?
>> >> >
>> >> > Thank you..
>> >> >
>> >
> So the dirty pages that are not committed can be written to both the data and log files.
Correct.
> Does that mean that if its written to the data files, it would get cleared from the log files even
> though its still not committed ?
By "cleared", you mean like when you do a log backup and log records are removed from the ldf file? The log can only be truncated to the oldest open transaction, so log records for uncommitted transactions will not be removed "prematurely". If that were the case, then SQL Server would be in a situation where it can't perform a rollback, which isn't acceptable.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Hassan" <Hassan@.hotmail.com> wrote in message news:eWb9Xu4IHHA.3676@.TK2MSFTNGP03.phx.gbl...
> Tibor,
> So the dirty pages that are not committed can be written to both the data and log files. Does that
> mean that if its written to the data files, it would get cleared from the log files even though
> its still not committed ?
> Thanks
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in message
> news:%2350BLy3IHHA.4376@.TK2MSFTNGP03.phx.gbl...
>> Data is always viewed and modified in cache. Checkpoint has nothing to do with locking. As soon
>> as you do, say, a modification, the rows are locked, and the lock is held until end of
>> transaction. Since other users also read the data from cache (never directly from disk), they
>> will be blocked, regardless of whether the page is dirty and regardless of the checkpoint
>> process.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://www.solidqualitylearning.com/
>>
>> "amish" <shahamishm@.gmail.com> wrote in message
>> news:1166538947.256500.58740@.48g2000cwx.googlegroups.com...
>> Ok got it,
>> When checkpoint / lazywriter runs it writes dirty pages to disk and
>> lock that data to prevent committed view of it. User can view that data
>> only after data is committed. Nice to know...
>> Thanks Tibor.
>> Regards
>> Amish shah
>>
>> Tibor Karaszi wrote:
>> > Only committed data will be written to the data files.
>> That is not correct, I'm afraid. SQL Server can write pages that are dirty and not committed to
>> the
>> database data files. This can happen by both the lazywriter and the checkpoint process. SQL
>> Server
>> makes sure that all log records (which can be cached) up to that point are written first,
>> however.
>> This is called the Write Ahead protocol (WAL). See
>> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlIObasics.mspx for details.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://www.solidqualitylearning.com/
>>
>> "amish" <shahamishm@.gmail.com> wrote in message
>> news:1166514127.349431.258400@.48g2000cwx.googlegroups.com...
>> > Yes
>> > Only committed data will be written to the data files.
>> >
>> > Regards
>> > Amish Shah
>> > http://shahamishm.tripod.com
>> >
>> >
>> > Uri Dimant wrote:
>> >> Hassan
>> >> Yes , they are. In sql server 2000 a virtual tables "inserted" and
>> >> "deleted" (within a triggers) are reading the data from LOG file
>> >>
>> >>
>> >> "Hassan" <Hassan@.hotmail.com> wrote in message
>> >> news:uYcMghyIHHA.1252@.TK2MSFTNGP02.phx.gbl...
>> >> > Say I update a column affecting a few thousand rows but do not issue a
>> >> > commit .. I want to know if all these new values for the column, are they
>> >> > in the Tlog or they actually get checkpointed to the data files as well ?
>> >> >
>> >> > If they do not go to the data files, would it be true that only committed
>> >> > data makes it to the data files while uncommitted data gets written to the
>> >> > Transaction Log files ?
>> >> >
>> >> > Thank you..
>> >> >
>> >
>>
>

Does Tlog actually contain data ?

Say I update a column affecting a few thousand rows but do not issue a
commit. I want to know: are all these new values for the column in the
Tlog, or do they actually get checkpointed to the data files as well?
If they do not go to the data files, would it be true that only committed
data makes it to the data files, while uncommitted data gets written to the
transaction log files?
Thank you..
Hassan
Yes, they are. In SQL Server 2000, the virtual tables "inserted" and
"deleted" (within triggers) read their data from the LOG file.
"Hassan" <Hassan@.hotmail.com> wrote in message
news:uYcMghyIHHA.1252@.TK2MSFTNGP02.phx.gbl...
> Say I update a column affecting a few thousand rows but do not issue a
> commit .. I want to know if all these new values for the column, are they
> in the Tlog or they actually get checkpointed to the data files as well ?
> If they do not go to the data files, would it be true that only committed
> data makes it to the data files while uncommitted data gets written to the
> Transaction Log files ?
> Thank you..
>

Yes
Only committed data will be written to the data files.
Regards
Amish Shah
http://shahamishm.tripod.com
Uri Dimant wrote:
> Hassan
> Yes , they are. In sql server 2000 a virtual tables "inserted" and
> "deleted" (within a triggers) are reading the data from LOG file
>
> "Hassan" <Hassan@.hotmail.com> wrote in message
> news:uYcMghyIHHA.1252@.TK2MSFTNGP02.phx.gbl...

Ok got it,
When checkpoint / lazywriter runs, it writes dirty pages to disk and
locks that data to prevent a view of it before commit. Users can view that
data only after it is committed. Nice to know...
Thanks Tibor.
Regards
Amish shah
Tibor Karaszi wrote:
> That is not correct, I'm afraid. SQL Server can write pages that are dirty and not committed to the
> database data files. This can happen by both the lazywriter and the checkpoint process. SQL Server
> makes sure that all log records (which can be cached) up to that point are written first, however.
> This is called the Write Ahead protocol (WAL). See
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlIObasics.mspx for details.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "amish" <shahamishm@.gmail.com> wrote in message
> news:1166514127.349431.258400@.48g2000cwx.googlegroups.com...
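Tibor's Write Ahead protocol point can be sketched as a toy model. This is an illustration only; the class and method names below are invented, not SQL Server internals. The rule it shows: a dirty page may be flushed to the data file even for an uncommitted transaction, but only after every log record up to that page's LSN is on disk.

```python
# Toy model of the Write Ahead Logging (WAL) rule: a dirty page may be
# flushed to the data file (even for an uncommitted transaction), but only
# after all log records up to that page's LSN are safely in the log file.

class ToyDatabase:
    def __init__(self):
        self.log_buffer = []   # log records not yet on disk
        self.log_file = []     # log records flushed to disk
        self.dirty_pages = {}  # page_id -> LSN of last change
        self.data_file = {}    # page_id -> LSN flushed to data file
        self.next_lsn = 1

    def modify(self, page_id):
        """Log the change first, then mark the page dirty in cache."""
        lsn = self.next_lsn
        self.next_lsn += 1
        self.log_buffer.append(lsn)
        self.dirty_pages[page_id] = lsn
        return lsn

    def flush_page(self, page_id):
        """Checkpoint/lazywriter: enforce WAL before writing the page."""
        page_lsn = self.dirty_pages[page_id]
        # WAL: all log records up to this page's LSN go to the log file first
        while self.log_buffer and self.log_buffer[0] <= page_lsn:
            self.log_file.append(self.log_buffer.pop(0))
        self.data_file[page_id] = page_lsn
        del self.dirty_pages[page_id]

db = ToyDatabase()
db.modify("page1")      # uncommitted change, LSN 1
db.modify("page1")      # second change, LSN 2
db.flush_page("page1")  # checkpoint writes the dirty page...
print(db.log_file)      # ...but LSNs 1 and 2 hit the log file first: [1, 2]
```

The point is the ordering: the undo/redo information reaches the log file before the uncommitted data reaches the data file, so a rollback is always possible.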

Tibor,
So the dirty pages that are not committed can be written to both the data
and log files. Does that mean that if it's written to the data files, it
would get cleared from the log files even though it's still not committed?
Thanks
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:%2350BLy3IHHA.4376@.TK2MSFTNGP03.phx.gbl...
> Data is always viewed and modified in cache. Checkpoint has nothing to do
> with locking. As soon as you do, say, a modification, the rows are locked,
> and the lock is held until end of transaction. Since other users also read
> the data from cache (never directly from disk), they will be blocked,
> regardless of whether the page is dirty and regardless of the checkpoint
> process.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "amish" <shahamishm@.gmail.com> wrote in message
> news:1166538947.256500.58740@.48g2000cwx.googlegroups.com...
>
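The locking behaviour Tibor describes in the quoted reply (locks taken at modification time and held until end of transaction, so other readers block regardless of dirty pages or checkpoints) can be illustrated with a minimal lock-table sketch. All names here are invented for illustration, not SQL Server's lock manager.

```python
# Minimal sketch of "lock held until end of transaction": a writer takes an
# exclusive (X) lock on a row when modifying it; another transaction's read
# is refused (i.e. would block) until the writer commits or rolls back.

class ToyLockManager:
    def __init__(self):
        self.x_locks = {}  # row -> transaction holding the exclusive lock

    def modify(self, txn, row):
        self.x_locks[row] = txn  # X lock taken at modification time

    def can_read(self, txn, row):
        holder = self.x_locks.get(row)
        # blocked if a different transaction holds the X lock
        return holder is None or holder == txn

    def end_transaction(self, txn):
        # Commit OR rollback: all of the transaction's locks release only now
        self.x_locks = {r: t for r, t in self.x_locks.items() if t != txn}

lm = ToyLockManager()
lm.modify("T1", "row42")
print(lm.can_read("T2", "row42"))  # False: T2 blocks, even if the page was flushed
lm.end_transaction("T1")
print(lm.can_read("T2", "row42"))  # True: lock released at end of transaction
```

Note that whether the checkpoint process happened to flush the dirty page is irrelevant to the reader: blocking is decided by the lock, not by where the page lives.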

> So the dirty pages that are not committed can be written to both the data
> and log files.
Correct.

> Does that mean that if its written to the data files, it would get cleared
> from the log files even though its still not committed ?
By cleared, you mean like when you do a log backup and log records are
removed from the ldf file? The log can only be truncated to the oldest open
transaction, so log records for uncommitted transactions will not be removed
"prematurely". If that was the case, then SQL Server would be in a situation
where it can't perform a rollback, which isn't acceptable.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Hassan" <Hassan@.hotmail.com> wrote in message news:eWb9Xu4IHHA.3676@.TK2MSFTNGP03.phx.gbl...

> Tibor,
> So the dirty pages that are not committed can be written to both the data
and log files. Does that
> mean that if its written to the data files, it would get cleared from the
log files even though
> its still not committed ?
> Thanks
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote i
n message
> news:%2350BLy3IHHA.4376@.TK2MSFTNGP03.phx.gbl...
>
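Tibor's truncation rule (the log can only be truncated back to the start of the oldest open transaction, so an uncommitted transaction's records always survive a log backup) can be sketched as follows. This is a toy model with invented names, not SQL Server's actual log manager.

```python
# Toy model of log truncation: a log backup can only remove records older
# than the start of the oldest still-open transaction, so an uncommitted
# transaction's records survive and a rollback remains possible.

def truncation_point(next_lsn, open_txn_start_lsns):
    """Oldest LSN that must be kept; everything before it may be truncated."""
    return min(open_txn_start_lsns) if open_txn_start_lsns else next_lsn

def truncate_log(log_records, next_lsn, open_txn_start_lsns):
    keep_from = truncation_point(next_lsn, open_txn_start_lsns)
    return [lsn for lsn in log_records if lsn >= keep_from]

log = [1, 2, 3, 4, 5, 6]
# LSNs 1-3 belong to committed work; an open transaction started at LSN 4
remaining = truncate_log(log, next_lsn=7, open_txn_start_lsns=[4])
print(remaining)  # [4, 5, 6]: the open transaction's records are kept

# Once that transaction ends, the whole log becomes eligible for truncation
print(truncate_log(remaining, next_lsn=7, open_txn_start_lsns=[]))  # []
```

This is also why a single long-running open transaction can keep a log file from shrinking no matter how many log backups you take: the truncation point cannot move past it.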