Recently I was tasked to share/branch a complex source tree in Visual SourceSafe 6.0 to do some changes with our project. For some reason, I found it very hard to do, and I kept getting this error:

What the heck does "A project cannot be shared under a descendant" mean? After some testing, it turns out the reason for this error is that VSS attempts to add the folder you specified to itself. That is why this is a recursive error: you are adding a folder to the folder you are recursively scanning. I tried specifying a path (i.e. $/ProjectA-Copied), but it refuses to recognize the full path and keeps wanting to add to itself ($/ProjectA/ProjectA-Copied). For the life of me, I just couldn't get VSS to share to another location.

After asking around, Paul (thanks!) told me that the problem is that when you select Share on a project (aka folder) in VSS Explorer, you are selecting the destination project, not the source. Boy... great user experience, isn't it? Armed with this key piece of knowledge, I created my destination project, selected Share on the destination project, selected the source project in the share dialog, and finally was able to share that source project to the destination project.

Also, if you are branching after sharing and you have a project with lots of files, I recommend you use the "Branch after share" check box: if you share without branching in one step, you will have to manually go into the destination project and branch all the files individually. Nope, you cannot branch at the project level!

And finally: yes, we're using Visual SourceSafe; yes, I know it sucks; and (what I'm most happy about) yes, we will be moving to Team Foundation Server and Visual Studio Team Suite!
public int DoSomething()
{
    ReportError("Here's an error message");
    return 0;
}

private void ReportError(string Message)
{
    // Get the frame one step up the call tree
    StackFrame CallStack = new StackFrame(1, true);

    // These will now show the file and line number of the ReportError
    // call in the DoSomething() method
    string SourceFile = CallStack.GetFileName();
    int SourceLine = CallStack.GetFileLineNumber();

    MyWriteToFile("Error: " + Message + " - File: " + SourceFile +
        " Line: " + SourceLine.ToString());
}
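A Java sketch of the same caller-lookup idea (class and method names here are made up for illustration): `Throwable.getStackTrace()` plays the role of `new StackFrame(1, true)`, with index 1 selecting the caller's frame.

```java
public class ErrorReporter {

    static int doSomething() {
        System.out.println(reportError("Here's an error message"));
        return 0;
    }

    // Look one frame up the stack: index 0 is reportError itself,
    // index 1 is its caller (doSomething here).
    public static String reportError(String message) {
        StackTraceElement caller = new Throwable().getStackTrace()[1];
        return "Error: " + message + " - File: " + caller.getFileName()
                + " Line: " + caller.getLineNumber();
    }

    public static void main(String[] args) {
        doSomething();
    }
}
```

As with the C# version, the file name and line number are only available when the code carries debug information (which javac includes by default).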
You can imagine how unfriendly it is for a library's users when its exception information is incomplete. Suppose MSDN says that File.Open() throws IOException: it is then hard for your program to do the disciplined thing, catch IOException, and prompt the user to check whether the file at the given location exists or is already open. Don't expect those MSDN messages to be much help to end users; plenty of people who don't understand computers are left just staring blankly at the mouse. Add to that the fact that Microsoft's own messages are often beside the point or simply baffling, and users who see them are even less likely to know how to fix the problem. Top it off with your application popping up one raw runtime-error dialog after another, and the result is really not a pleasant piece of work.
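To make the point concrete, here is a small Java sketch (the file name is made up): catching the most specific exception type available is what lets a program show a useful hint instead of a raw error dialog, and a coarse exception hierarchy limits how specific that hint can be.

```java
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

public class FriendlyErrors {

    // Catch the most specific exception type available so the program
    // can show a concrete hint instead of a raw error box.
    public static String describeOpenFailure(String path) {
        try (FileReader reader = new FileReader(path)) {
            return "opened";
        } catch (FileNotFoundException e) {
            return "Check that the file exists and is not locked: " + path;
        } catch (IOException e) {
            // Without finer-grained exception types, this is the best we can say.
            return "Could not open " + path + ": " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(describeOpenFailure("no-such-file.txt"));
    }
}
```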
However, if you use checked exceptions, the compiler forces the exception list of your public interface to be reasonably complete; at the very least it can be as tidy as the exception types listed in MSDN.

1.1 .NET Framework 1.1
聽(tīng)聽(tīng) public StreamWriter(聽(tīng)聽(tīng)聽(tīng) string path聽(tīng) );
Exception type              | Condition
UnauthorizedAccessException | Access is denied.
ArgumentException           | path is an empty string ("").
ArgumentNullException       | path is a null reference (Nothing in Visual Basic).
DirectoryNotFoundException  | The specified path is invalid, such as being on an unmapped drive.
PathTooLongException        | The specified path, file name, or both exceed the system-defined maximum length. For example, on Windows-based platforms, paths must be less than 248 characters and file names must be less than 260 characters.
IOException                 | path includes an incorrect or invalid syntax for file name, directory name, or volume label.
SecurityException           | The caller does not have the required permission.
1.2 JDK 1.4.2:

public FileWriter(String fileName) throws IOException
Constructs a FileWriter object given a file name.
Parameters:
fileName
- String The system-dependent filename.
Throws:
IOException
- if the named file exists but is a directory rather than a regular file, does not exist but cannot be created, or cannot be opened for any other reason
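A quick illustration of how coarse this contract is (a sketch; "/" is just a convenient path that is a directory rather than a regular file): whatever the underlying reason for the failure, all the caller sees is IOException.

```java
import java.io.FileWriter;
import java.io.IOException;

public class FileWriterDemo {

    // Try to open a path for writing. Per the javadoc above, a path that
    // is a directory (or otherwise cannot be opened) yields only a
    // coarse-grained IOException.
    public static String tryOpen(String path) {
        try (FileWriter w = new FileWriter(path)) {
            return "opened";
        } catch (IOException e) {
            return "IOException";
        }
    }

    public static void main(String[] args) {
        // "/" is a directory, not a regular file, so this fails.
        System.out.println(tryOpen("/"));
    }
}
```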
2. Analysis

.NET exceptions are unchecked: client code is not required to check for them. Most Java exceptions, on the other hand, are checked: the compiler requires client code to deal with them.

Consider a scenario in which a method OutMethod calls InnerMethod, and InnerMethod throws an exception, InnerException.

With a Java checked exception, OutMethod must either declare that it throws InnerException, or catch InnerException (and then handle it).

Now look again at the exception declaration of the JDK's FileWriter. I have not tested in detail the IOException messages it produces in the various failure cases, but its classification is far less fine-grained than that of .NET's StreamWriter. If Java were to copy .NET's StreamWriter, it would be nothing short of a nightmare for Java users: outer code would have to catch every one of those exception types (and anything not caught would have to be declared and rethrown by OutMethod, letting the problem propagate onward; this is one weakness of checked exceptions). Perhaps it is exactly because of this problem that the exception declarations of Java interfaces here are kept relatively simple.

Now suppose I am a library designer and my code does IO. If I am developing in .NET, do I need to catch IOException? The purpose of catching is to "handle", but what the library designer needs to do at this point is tell the "client programmer" why the operation failed, so the library designer's best behavior is to not handle it. If you do handle it, all you can do is "catch, then throw", and that is clearly pointless, because the original exception is already enough to tell the client programmer what went wrong. And if you do catch, the code becomes especially ugly (catching Exception directly is unacceptable).

Another weakness of checked exceptions is that they put the exception into the declared contract of the interface. Suppose OutMethod calls InnerMethod, and the designer of InnerMethod needs to add an exception: that change directly affects OutMethod. Of course, by doing so the designer of InnerMethod has performed an act of "changing the interface declaration".
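The declare-or-catch choice described above can be sketched in Java (the OutMethod/InnerMethod names are taken from the text, lower-cased to Java conventions):

```java
public class CheckedDemo {

    // A checked exception: callers must declare it or catch it.
    static class InnerException extends Exception {
        InnerException(String msg) { super(msg); }
    }

    static void innerMethod() throws InnerException {
        throw new InnerException("failure inside innerMethod");
    }

    // Option 1: propagate. The exception becomes part of this
    // method's declared contract.
    static void outMethodPropagates() throws InnerException {
        innerMethod();
    }

    // Option 2: catch and handle locally.
    public static String outMethodCatches() {
        try {
            innerMethod();
            return "ok";
        } catch (InnerException e) {
            return "handled: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(outMethodCatches());
    }
}
```

Removing either the `throws` clause or the `catch` block makes the compiler reject the program, which is exactly the contract-enforcement (and the rigidity) discussed above.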
To be continued...
Newsgroup: microsoft.public.dotnet.framework.adonet
From: jinfeng_W...@msn.com
Date: 19 Jan 2006 17:56:57 -0800
Subject: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
Hi, I have a question about the difference between
SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose(). Do both
of them release the connection back to the connection pool? Do they share
the same source code? If they are different, who can tell me the
differences? If they are the same, why does MS give us
SqlConnection.IDisposable.Dispose in addition to the
SqlConnection.Dispose() method?
In MSDN, there is the following description of the
SqlConnection.IDisposable.Dispose method:
"This member supports the .NET Framework infrastructure and is not
intended to be used directly from your code." What does it mean?
If the user calls SqlConnection.IDisposable.Dispose() in the
client application, what problem results? And if some problem does
result, then why did MS give us such a method?
In the same way, who can tell me the use of
"SqlConnection.ICloneable.Clone",
"SqlConnection.IDbConnection.BeginTransaction" and
"SqlConnection.IDbConnection.CreateCommand"?
Can anybody help me solve my question? Thanks a lot.
2. Cor Ligthert [MVP]
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "Cor Ligthert [MVP]" <notmyfirstn...@planet.nl>
Date: Fri, 20 Jan 2006 08:10:00 +0100
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
JinFeng,
Both of them remove the connection string from the connection object.
Neither has anything to do directly with the connection pool, although
you should close a connection, either by Close or Dispose, to let the
connection pool do its job.
Every interface can be used to get at its members (in the implementing
contract) from the implementing class. That is why it is an interface. A
good programmer starts the names of his interfaces with a capital I.
I hope that this gives an idea.
Cor
3. jinfeng_Wang@msn.com
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "jinfeng_W...@msn.com" <jinfeng_W...@msn.com>
Date: 20 Jan 2006 01:52:12 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
Hello, Cor. Thanks for your answer, but it's not what I want.
Please read the following URL and look at the left frame:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpre...
There are two methods on SqlConnection:
the "Dispose" and "SqlConnection.IDisposable.Dispose" methods.
And in MSDN, the description of
"SqlConnection.IDisposable.Dispose" is: "This member supports the .NET
Framework infrastructure and is not intended to be used directly from
your code".
Can you help tell me what the above sentence means? MS
advises me that I should not call
"SqlConnection.IDisposable.Dispose", yeah?
Here is some client code (copied from MSDN, and I have
modified it):
public void ReadMyData()
{
    String myConnString = "Persist Security Info=False;User ID=sa;" +
        "Initial Catalog=Northwind;Data Source=DTK-S-SVR;Pwd=sa";
    string mySelectQuery = "SELECT OrderID, CustomerID FROM Orders";
    SqlConnection myConnection = new SqlConnection(myConnString);
    SqlCommand myCommand = new SqlCommand(mySelectQuery, myConnection);
    myConnection.Open();
    SqlDataReader myReader;
    myReader = myCommand.ExecuteReader();
    while (myReader.Read())
    {
        Console.WriteLine(myReader.GetInt32(0) + ", " +
            myReader.GetString(1));
    }
    myReader.Close();
    //myConnection.Close();                               // old source code
    IDisposable disposable = myConnection as IDisposable; // new source code
    disposable.Dispose();                                 // new source code
}
Will it cause some problem to cast myConnection to IDisposable
and call disposable.Dispose()?
Does the "new source code" have the same effect as the "old source
code"?
If it does, why did MS implement the "IDisposable.Dispose()" method
explicitly? I mean: why isn't there only the SqlConnection.Dispose()
method?
If there is no problem here, then can you tell me why MS said that
"This member supports the .NET Framework infrastructure and is not
intended to be used directly from your code"?
If there is some problem, then why did MS not make the
SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose() methods
share the same source code? And why does MS tell us that there exists a
"SqlConnection.IDisposable.Dispose()" method, but warn us not to
call it?
In the same way, how about "SqlConnection.ICloneable.Clone",
"SqlConnection.IDbConnection.BeginTransaction" and
"SqlConnection.IDbConnection.CreateCommand"?
I do not know whether you have understood my question, given my poor
English and expression.
Can you help me? Anyway, thanks to all of you.
4. Cor Ligthert [MVP]
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "Cor Ligthert [MVP]" <notmyfirstn...@planet.nl>
Date: Fri, 20 Jan 2006 11:29:36 +0100
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
JinFeng,
Both Frans and I are not first-language English speakers (AFAIK Frans
was born where the native language is Frisian, and I where it is Dutch).
In my opinion you should never apologize for your English in these
newsgroups. Almost everybody, no matter how well he speaks a language,
will make errors in email messages (even in his own). I assume that you
are using the English version of Visual Studio, so that says enough.
I think that the protected Dispose is used by components (don't mix this
up with the rest of what I write about components). If you open a form
or a component (in the designer), then you see the implementation of
IDisposable directly. That part is used to do most of the disposing.
Nice pages about IDisposable are these new ones:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpre...
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpre...
As I said, be aware that if you use a form or a component, the code is
already in the designer-created part.
However, as long discussions in this newsgroup have concluded: Disposing
and Closing have the same effect on the connection pool.
I hope this gives some information.
Cor
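The close-versus-dispose equivalence discussed in this thread has a rough Java analogue that may make the idea concrete (a simulated pool; all names here are hypothetical): AutoCloseable.close() plays the role of Dispose(), and try-with-resources guarantees it runs.

```java
public class PooledConnection implements AutoCloseable {

    static int returnedToPool = 0;

    // In this sketch, "close" and "dispose" are the same operation:
    // the connection goes back to the (simulated) pool either way.
    @Override
    public void close() {
        returnedToPool++;
    }

    // Open a connection, use it, and rely on try-with-resources to
    // return it to the pool exactly once.
    public static int useOnce() {
        returnedToPool = 0;
        try (PooledConnection conn = new PooledConnection()) {
            // use the connection here
        }
        return returnedToPool;
    }

    public static void main(String[] args) {
        System.out.println(useOnce());
    }
}
```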
5. Frans Bouma [C# MVP]
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "Frans Bouma [C# MVP]" <perseus.usenetNOS...@xs4all.nl>
Date: Fri, 20 Jan 2006 00:22:50 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
jinfeng_W...@msn.com wrote:
> hi, I have a question about the difference between
> SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose().
> Both of them realize the function of releasing the connection to the
> ConnectionPool? Do they have the same effection source code? If they
> are different, who can tell me the differences? If they are same, why
> MS gives the SqlConnection.IDisposable.Dispose, but only
> SqlConnection.Dispose() method?
> In the MSDN, there are following description about the
> SqlConnection.IDisposable.Dispose Method:
> "This member supports the .NET Framework infrastructure and is not
> intended to be used directly from your code." what's the meaning of
> it?
> If the user has called the SqlConnection.IDisposable.Dispose() in the
> client application, what probem results in? and if there are some
> problem becomes, then why did MS give us such a method?
> in the same, who can tell me the using of
> "SqlConnection.ICloneable.Clone ",
> "SqlConnection.IDbConnection.BeginTransaction" and
> "SqlConnection.IDbConnection.CreateCommand"?
> Anybody can help me to solve my question? thanks a lot.
What does 'SqlConnection.IDisposable.Dispose' mean? 'IDisposable'
isn't a property or anything of SqlConnection. 'Dispose()' is a method
on Component, the base class of SqlConnection. SqlConnection overrides
Dispose(true), which is called from Dispose(), and therefore whichever
Dispose() you call, it doesn't matter.
FB
--
------------------------------------------------------------------------
Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------
6. jinfeng_Wang@msn.com
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "jinfeng_W...@msn.com" <jinfeng_W...@msn.com>
Date: 20 Jan 2006 01:55:46 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
FB, I know that 'IDisposable' is not a property of
SqlConnection. Please read my answer to Cor.
'SqlConnection.IDisposable.Dispose' is copied from MSDN. :-)
I think it means that SqlConnection has implemented the Dispose() method
of IDisposable explicitly.
I want to know the difference between the two methods,
SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose().
Thanks to you!!! Thanks!
7. Frans Bouma [C# MVP]
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "Frans Bouma [C# MVP]" <perseus.usenetNOS...@xs4all.nl>
Date: Sat, 21 Jan 2006 02:54:24 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
jinfeng_W...@msn.com wrote:
> FB, i know that 'IDisposable' is not one property of the
> SQLConnection. please read my answer to Cor.
> 'SqlConnection.IDisposable.Dispose' is copied from MSDN. :-)
> I think it means that SQLConnection has implemnt the Dispose() method
> of the IDisposable explicity.
I thought that it meant that, but checking SqlConnection in Reflector
I couldn't find any explicit IDisposable implementations :D Hence my
question :)
> I want to know the difference between the two method of
> SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose().
I have no idea.
FB
--
------------------------------------------------------------------------
Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------
8. jinfeng_Wang@msn.com
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "jinfeng_W...@msn.com" <jinfeng_W...@msn.com>
Date: 22 Jan 2006 17:17:00 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
:-)
Copied from Reflector:

System.Data.SqlClient.SqlConnection.System.Data.IDbConnection.BeginTransaction() : IDbTransaction

IDbTransaction IDbConnection.BeginTransaction()
{
    return this.BeginTransaction();
}

I think that the disposable case is the same as here.
But why does MS do such a foolish thing :-(
The following is copied from MSDN, under the
SqlConnection.IDbConnection.BeginTransaction method:
"This member supports the .NET Framework infrastructure and is not
intended to be used directly from your code."
Faint to death. ~
9. William (Bill) Vaughn
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "William (Bill) Vaughn" <billvaRemoveT...@nwlink.com>
Date: Sat, 21 Jan 2006 10:55:46 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
Ok... if you're that curious, use Reflector to walk through the .NET
Framework code to see what it does. You must have a lot more time on
your hands than we do.
Frankly, it does not matter what they do. You don't need to call
them--either of them. As long as you use Close on the Connection you're
fine. Sure, you can call Dispose if you want to, but it won't help the
problem you're trying to solve.
--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
"Frans Bouma [C# MVP]" <perseus.usenetNOS...@xs4all.nl> wrote in message
news:xn0ehgc512v14b001@news.microsoft.com...
> jinfeng_W...@msn.com wrote:
>> hi, I have a question about the difference between
>> SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose().
>> Both of them realize the function of releasing the connection to the
>> ConnectionPool? Do they have the same effection source code? If they
>> are different, who can tell me the differences? If they are same, why
>> MS gives the SqlConnection.IDisposable.Dispose, but only
>> SqlConnection.Dispose() method?
>> In the MSDN, there are following description about the
>> SqlConnection.IDisposable.Dispose Method:
>> "This member supports the .NET Framework infrastructure and is not
>> intended to be used directly from your code." what's the meaning of
>> it?
>> If the user has called the SqlConnection.IDisposable.Dispose() in the
>> client application, what probem results in? and if there are some
>> problem becomes, then why did MS give us such a method?
>> in the same, who can tell me the using of
>> "SqlConnection.ICloneable.Clone ",
>> "SqlConnection.IDbConnection.BeginTransaction" and
>> "SqlConnection.IDbConnection.CreateCommand"?
>> Anybody can help me to solve my question? thanks a lot.
> what does 'SqlConnection.IDisposable.Dispose' mean? 'IDisposable'
> isn't a property or something of SqlConnection. 'Dispose()' is a method
> in Component, the base class of SqlConnection. SqlConnection overrides
> Dispose(true), which is called from Dispose(), and therefore whatever
> Dispose() you call, it doesnt matter.
> FB
> --
> ------------------------------------------------------------------------
> Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com
> My .NET blog: http://weblogs.asp.net/fbouma
> Microsoft MVP (C#)
> ------------------------------------------------------------------------
10. Frans Bouma [C# MVP]
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "Frans Bouma [C# MVP]" <perseus.usenetNOS...@xs4all.nl>
Date: Sun, 22 Jan 2006 02:52:11 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
William (Bill) Vaughn wrote:
> Ok... if you're that curious, use Reflector to walk through the .Net
> Framework code to see what it does. You must have a lot more time on
> your hands that we do.
> Frankly, it does not matter what they do. You don't need to call
> them--either of them. As long as you use Close on the Connection
> you're fine. Sure, you can call Dispose if you want to, but it won't
> help the problem you're trying to solve.
On a side note: not all ADO.NET providers' connection objects can
live without a Dispose call. For example, the Firebird .NET provider and
the ODP.NET provider do need a call to Dispose to properly clean up
(especially Firebird, for cleaning up on the server side!).
FB, who still couldn't find an explicit IDisposable implementation on
SqlConnection...
--
------------------------------------------------------------------------
Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------
11. jinfeng_Wang@msn.com
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "jinfeng_W...@msn.com" <jinfeng_W...@msn.com>
Date: 22 Jan 2006 16:55:26 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
Thanks, all of you.
If I were just a client programmer, I think all the information from you
and MSDN would be enough.
But... now I am trying to develop a new .NET data provider for my own
database.
So I want to know what happens inside the SQL .NET data provider.
It gives me dozens of puzzling interfaces.
Here there are "SqlConnection.IDisposable.Dispose()" and
"SqlConnection.Dispose()".
In fact, SqlConnection inherits from Component, which has
implemented IDisposable.
This is very like the "deadly diamond" question, where one class
inherits one interface through TWO paths:

        IDisposable
        /         \
       |           |
   Component   IDbConnection
       |           |
        \         /
       SqlConnection

If there is no difference between
SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose(),
then MS has no need to implement
SqlConnection.IDisposable.Dispose() explicitly. But it has done
so, and that means there are some differences between them. What are
they? MSDN has not told us; it just warns us not to call
SqlConnection.IDisposable.Dispose() in the client program. :(
In the Oracle® Data Provider for .NET,

public sealed class OracleConnection : Component, IDbConnection,
    ICloneable

but OracleConnection has not implemented
IDbConnection.IDisposable.Dispose() explicitly.
MS, :-(
14. Frans Bouma [C# MVP]
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "Frans Bouma [C# MVP]" <perseus.usenetNOS...@xs4all.nl>
Date: Mon, 23 Jan 2006 01:54:56 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
jinfeng_W...@msn.com wrote:
> If i was just a client programmer, i think all the information from
> you and MSDN is enough.
> but,... now i am trying to develop one new .NET Data Provider for
> own database.
There are guidelines for that, if I'm not mistaken. And I think you
should relax a little. I work with a lot of ADO.NET providers and none
of them works the same as the others. So 'what should be done' is
whatever you think is easiest for your users :).
So 'Close()' should clean up, and, for example, connection.Dispose()
should also dispose commands, parameters etc.
In general, derive a class from DbConnection and override the specific
methods to add your own code.
> so i want to know what has happened in SQL .NET Data Provider.
> it give me a dozens of puzzling interface.
> here is "SqlConnection.IDisposable.Dispose() " and
> "SqlConnection.Dispose()".
Have you looked into the code with Reflector? I think you should do
that. :)
> In fact, the SQLConnection is inherited from Compnent, which has
> implemented the IDisposable.
> This is very like the question of "deadly diamond", that one Class
> inherits one Interface from TWO path.
> IDisposable
> / \
> | |
> Component IDBConnection
> | |
> \ /
> SQLConnection
> if there is no difference between the
> SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose(),
> then MS has no need to implement the
> SqlConnection.IDisposable.Dispose() explicitly. But, it has done
> it, that means there are some differences between them. What's that?
> MSDN has not told us, it just warn us not to call the
> SqlConnection.IDisposable.Dispose() in the client program. :(
I looked up the page, and I can only get to that page through the
index. So I think it's a mistake in MSDN. As said by others in this
thread, look at the code through Reflector first, then come back here
with questions.
Also, inherited interfaces are simply type definitions, not
implementations. So one routine can serve as the Dispose() method of
multiple interfaces.
> In the Oracle廬 Data Provider for .NET,
> public sealed class OracleConnection : Component, IDbConnection,
> ICloneable
> but, OracleConnection has not implement the
> IDbConnection.IDisposable.Disposable() explicity.
> MS, :-(
Neither has SqlConnection!!! Look into the code! Just because it's in
the MSDN doesn't mean it's true; it's an error. Why do you ignore what
we said and keep believing an erroneous page in the MSDN?
FB
--
------------------------------------------------------------------------
Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------
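Frans's point that one routine can serve the same method of several interfaces can be sketched in Java (all names here are hypothetical; Java has no explicit interface implementation, which is exactly why a single method body suffices and no "diamond" conflict arises):

```java
interface Disposable { void dispose(); }

// Inherits dispose() from Disposable, mirroring how IDbConnection
// extends IDisposable in the diagram from the thread.
interface DbConnectionLike extends Disposable { }

public class SqlLikeConnection implements DbConnectionLike, Disposable {

    private boolean disposed = false;

    // One method body satisfies Disposable.dispose() both directly
    // and as inherited through DbConnectionLike.
    @Override
    public void dispose() { disposed = true; }

    public boolean isDisposed() { return disposed; }

    public static boolean demo() {
        SqlLikeConnection conn = new SqlLikeConnection();
        Disposable viaInterface = conn; // call through either interface type
        viaInterface.dispose();
        return conn.isDisposed();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```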
15. jinfeng_Wang@msn.com
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "jinfeng_W...@msn.com" <jinfeng_W...@msn.com>
Date: 23 Jan 2006 06:15:59 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
:-)
An MSDN error?!
Why did MS make such an error?
I have said that I am now developing a .NET data provider for my own
database, so I should know everything that happens in the SQL .NET data
provider, and learn something from MS's code.
Now I have realized why MS gave us such a SQL .NET data provider.
Wait a minute, and I will write it up clearly.
16. jinfeng_Wang@msn.com
Newsgroup: microsoft.public.dotnet.framework.adonet
From: "jinfeng_W...@msn.com" <jinfeng_W...@msn.com>
Date: 23 Jan 2006 07:10:37 -0800
Subject: Re: the difference between SqlConnection.IDisposable.Dispose() and SqlConnection.Dispose()
Firstly, let's take a look at the following code.

///--------------
public interface MyInterface
{
    Object getobject();
}

public class MyImplement : MyInterface
{
    public String getobject() // override
    {
        return "str";
    }
}
///---------------

The code above cannot be compiled; the compiler will give us an
error, because the method MyImplement.getobject() has the wrong
return type, "String", while MyInterface.getobject() declares that the
return type is "Object". Here the compiler takes the return type into
account in the "override" compile process: if MyImplement.getobject()
returns "String", it is not an implementation of
MyInterface.getobject().
Now let's take a look at another piece of code.

///---------------
public class AnotherImplement
{
    public object getobject()
    {
        return new object();
    }

    public string getobject() // overload
    {
        return "";
    }
}
///---------------

Here I want to overload the method "getobject()", but the compiler
gives us an error. Here the compiler DOES NOT take the return type
into account in the "overload" compile process (THE DIFFERENCE FROM
OVERRIDE): two overloads may not differ only in return type.
Now let's go back to IDbConnection and SqlConnection.
The interface IDbConnection declares a method "CreateCommand":

///----------
IDbConnection {
    ....
    IDbCommand CreateCommand();
    ....
}
///----------

When designing the interface IDbConnection, the designer does not know
what kind of "Command" will be returned, for example "SqlCommand" or
"OracleCommand". So IDbConnection.CreateCommand() can only return the
interface "IDbCommand".
Now we are designing SqlConnection, which implements the interface
"IDbConnection". If SqlConnection only contains:

///---------
SqlConnection : IDbConnection {
    ....
    SqlCommand CreateCommand() {
        ........
    }
    ....
}
///---------

the compiler will give us an error, because the return type is
SqlCommand, not IDbCommand (as declared in IDbConnection). THE
COMPILER TAKES THE RETURN TYPE INTO ACCOUNT HERE. So we must
give another definition of "IDbConnection.CreateCommand()".
If we write the code as follows:

///---------
SqlConnection : IDbConnection {
    ....
    SqlCommand CreateCommand() {
        ........
    }
    IDbCommand CreateCommand() {
        .......
    }
    ....
}
///---------

the compiler will again give us an error: THE COMPILER DOES NOT TAKE
THE RETURN TYPE INTO ACCOUNT FOR OVERLOADING.
So SqlConnection has to implement IDbConnection.CreateCommand
explicitly:

///--------
SqlConnection : IDbConnection {
    ....
    SqlCommand CreateCommand() {
        ........
    }
    IDbCommand IDbConnection.CreateCommand() {
        return this.CreateCommand();
    }
    ....
}
///---------

That is SqlConnection.
============BUT============
MSDN told us that "we should NOT call
SqlConnection.IDbConnection.CreateCommand() in the client program";
that is, "we should use SqlConnection.CreateCommand() in the client
program."
THIS will result a foolish result. In the client pogram, we will write
such a code:
///------------
void doSomething() {
SqlConnection conn = new SqlConnection(connectionString);
SqlCommand command = conn.CreateCommand();
// We MUST have call
SqlConnection.CreateCommand(),
// not
SqlConnection.IDbConnection.CreateCommand(),
// which is suggested by MSDN.
}
///------------
The above client code has violated the programming rule: "PROGRAMMING
TO INTERFACE".
According to the MSDN, we can not "PROGRAM TO IDbCommand".
The action of "SqlConnection.CreateCommand()" is difference with the
action of"IDbConnection.CreeateCommand()".
Now if i want to shift to the Oracle database. All of the client code
must be modified, because the above is "PROGRAM TO SqlCommand". :-(
What a foolish ADO.NET FrameWork.
I am a java programmer. If i design the SQLConnection, i will give a
such a implemention:
///--------
SQLConnection:IDbConnection {
....
IDBCommand CreateCommand() {
........
}
SqlCommand CreateSqlCommand() {
........
}
....
}
///---------
A client programmer who wants to program to SqlCommand calls
SqlConnection.CreateSqlCommand(); one who wants to program to
IDbConnection calls SqlConnection.CreateCommand(). Now
SqlConnection.CreateCommand() behaves the same as
IDbConnection.CreateCommand(). Because the client code is programmed
to IDbCommand, switching to an Oracle database no longer requires
modifying all of the client code.
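The explicit interface implementation pattern the post describes can be shown with a small, compilable sketch. The interface and class names below are hypothetical stand-ins for IDbConnection/IDbCommand/SqlConnection, chosen only to keep the example self-contained; they are not the real ADO.NET types.

```csharp
using System;

// Hypothetical stand-ins for IDbCommand/IDbConnection.
interface ICommand { string Text { get; } }
interface IConnection { ICommand CreateCommand(); }

class SqlishCommand : ICommand
{
    public string Text { get { return "sqlish"; } }
}

class SqlishConnection : IConnection
{
    // Strongly typed version for callers that want the concrete type.
    public SqlishCommand CreateCommand() { return new SqlishCommand(); }

    // Explicit interface implementation: only reachable through an
    // IConnection reference, and it just forwards to the typed version.
    ICommand IConnection.CreateCommand() { return CreateCommand(); }
}

class Demo
{
    static void Main()
    {
        // Programming to the interface: this compiles against any IConnection.
        IConnection conn = new SqlishConnection();
        ICommand cmd = conn.CreateCommand();
        Console.WriteLine(cmd.Text);   // prints "sqlish"
    }
}
```

Because the explicit implementation forwards to the typed one, code that holds an interface reference behaves identically to code that holds the concrete type, which is the consistency the post is asking for.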
After reading a post on the C# newsgroup asking for a EBCDIC to ASCII converter, and seeing one solution, I decided to write my own implementation. This page describes the implementation and its limitations, and a bit about EBCDIC itself.
Unfortunately it appears to be fairly tricky to get hold of many concrete specifications of EBCDIC. This is what I've managed to glean from various websites:
If you have any more information, particularly about the DBCS aspect, please mail me at skeet@pobox.com.
I managed to get hold of details of 47 EBCDIC encodings from http://std.dkuug.dk/i18n/charmaps/. To be honest, I don't really know what DKUUG is, so I'm really just hoping that the maps are accurate - they seem to be quite reasonable though. Each encoding has a name and several have aliases, although I currently ignore this aliasing.
My implementation consists of three projects, described below, of which only the middle one is of any interest to most people.
The reader. This is a console application built from a single C# source file; it produces the ebcdic.dat data file.
The encoding library. This uses the ebcdic.dat file generated by the reader. This library is all most users will need. More details are provided below.
The encoding library is very simple to use, as the encoding class (JonSkeet.Ebcdic.EbcdicEncoding) is a subclass of the standard .NET System.Text.Encoding class. To obtain an instance of the appropriate encoding, use EbcdicEncoding.GetEncoding(String), passing it the name of the encoding you wish to use (e.g. EBCDIC-US). You can find out the list of names of available encodings using the EbcdicEncoding.AllNames property, which returns the names as an array of strings.

Once you have obtained an EbcdicEncoding instance, use it like any other Encoding: call GetString, GetBytes, etc. The encoding does not save any state between requests, and can safely be used by many threads simultaneously. There is no need (or indeed facility) to release encoding resources when it is no longer needed. All encodings are created on the first use of the EbcdicEncoding class, and maintained until the application domain is unloaded.
The following is a sample program to convert a file from EBCDIC-US to ASCII. It should be easy to see how to modify it to convert the other way, or to use a different encoding (eg from EBCDIC-UK, or to UTF-8).
using System;
using System.IO;
using System.Text;
using JonSkeet.Ebcdic;

public class ConvertFile
{
    public static void Main(string[] args)
    {
        if (args.Length != 2)
        {
            Console.WriteLine ("Usage: ConvertFile <ebcdic file (input)> <ascii file (output)>");
            return;
        }
        string inputFile = args[0];
        string outputFile = args[1];
        Encoding inputEncoding = EbcdicEncoding.GetEncoding ("EBCDIC-US");
        Encoding outputEncoding = Encoding.ASCII;
        try
        {
            // Create the reader and writer with appropriate encodings.
            using (StreamReader inputReader = new StreamReader (inputFile, inputEncoding))
            {
                using (StreamWriter outputWriter = new StreamWriter (outputFile, false, outputEncoding))
                {
                    // Create an 8K-char buffer
                    char[] buffer = new char[8192];
                    int len = 0;
                    // Repeatedly read into the buffer and then write it out
                    // until the reader has been exhausted.
                    while ((len = inputReader.Read (buffer, 0, buffer.Length)) > 0)
                    {
                        outputWriter.Write (buffer, 0, len);
                    }
                }
            }
        }
        // Not much in the way of error handling here - you may well want
        // to do better handling yourself!
        catch (IOException e)
        {
            Console.WriteLine ("Exception during processing: {0}", e.Message);
        }
    }
}
Due to the lack of available information about the DBCS aspect of EBCDIC, this encoding class makes no effort whatsoever to simulate proper shifting. Shift out and shift in are merely encoded/decoded to/from their equivalent Unicode characters, and bytes between them are treated as if the shift had not taken place. (This means that a decoded byte array is always a string of the same length as the byte array, and vice versa).
Any byte not recognised to be from the specific encoding being used is decoded to the question mark character, '?'. Any character not recognised to be in the set of characters encoded by the specific encoding being used is encoded to the byte representing the question mark character, or to byte zero if the question mark character is not in the character set either.
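The substitution rule can be sketched with a tiny decode table. The byte values below happen to be the EBCDIC-US codes for 'A', 'B', space, and '?', but the table itself is illustrative, not the real one loaded from ebcdic.dat.

```csharp
using System;
using System.Collections.Generic;

class FallbackDemo
{
    // Hypothetical single-byte decode table; the real tables come
    // from the ebcdic.dat data file.
    static readonly Dictionary<byte, char> Decode = new Dictionary<byte, char>
    {
        { 0xC1, 'A' }, { 0xC2, 'B' }, { 0x40, ' ' }, { 0x6F, '?' }
    };

    // Mirror of the article's rule: any unrecognised byte decodes to '?'.
    static char DecodeByte(byte b)
    {
        char c;
        return Decode.TryGetValue(b, out c) ? c : '?';
    }

    static void Main()
    {
        byte[] input = { 0xC1, 0x40, 0xC2, 0xFF };  // 0xFF is unmapped here
        foreach (byte b in input)
            Console.Write(DecodeByte(b));
        // prints "A B?"
    }
}
```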
The library doesn't currently have a strong name, so it can't be placed in the GAC. You may, however, download the source and modify it.
This was just an interesting half-day project. I have no desire to make any money out of this code whatsoever, but I hope it's interesting and useful to others. So, feel free to use it. If you have any questions about it, or just find it useful and wish to let me know, please mail me at skeet@pobox.com. You may use this code in commercial projects, either in binary or source form. You may change the namespace and the class names to suit your company, and modify the code if you wish. I'd rather you didn't try to pass it off as your own work, and specifically you may not sell just this code - at least not without asking me first. I make no claims whatsoever about this code - it comes with no warranty, not even the implied warranty of fitness for purpose, so don't sue me if it breaks something. (Mail me instead, so we can try to stop it from happening again.)
Identifier | Name |
---|---|
037 | IBM EBCDIC - U.S./Canada |
437 | OEM - United States |
500 | IBM EBCDIC - International |
708 | Arabic - ASMO 708 |
709 | Arabic - ASMO 449+, BCON V4 |
710 | Arabic - Transparent Arabic |
720 | Arabic - Transparent ASMO |
737 | OEM - Greek (formerly 437G) |
775 | OEM - Baltic |
850 | OEM - Multilingual Latin I |
852 | OEM - Latin II |
855 | OEM - Cyrillic (primarily Russian) |
857 | OEM - Turkish |
858 | OEM - Multilingual Latin I + Euro symbol |
860 | OEM - Portuguese |
861 | OEM - Icelandic |
862 | OEM - Hebrew |
863 | OEM - Canadian-French |
864 | OEM - Arabic |
865 | OEM - Nordic |
866 | OEM - Russian |
869 | OEM - Modern Greek |
870 | IBM EBCDIC - Multilingual/ROECE (Latin-2) |
874 | ANSI/OEM - Thai (same as 28605, ISO 8859-15) |
875 | IBM EBCDIC - Modern Greek |
932 | ANSI/OEM - Japanese, Shift-JIS |
936 | ANSI/OEM - Simplified Chinese (PRC, Singapore) |
949 | ANSI/OEM - Korean (Unified Hangul Code) |
950 | ANSI/OEM - Traditional Chinese (Taiwan; Hong Kong SAR, PRC) |
1026 | IBM EBCDIC - Turkish (Latin-5) |
1047 | IBM EBCDIC - Latin 1/Open System |
1140 | IBM EBCDIC - U.S./Canada (037 + Euro symbol) |
1141 | IBM EBCDIC - Germany (20273 + Euro symbol) |
1142 | IBM EBCDIC - Denmark/Norway (20277 + Euro symbol) |
1143 | IBM EBCDIC - Finland/Sweden (20278 + Euro symbol) |
1144 | IBM EBCDIC - Italy (20280 + Euro symbol) |
1145 | IBM EBCDIC - Latin America/Spain (20284 + Euro symbol) |
1146 | IBM EBCDIC - United Kingdom (20285 + Euro symbol) |
1147 | IBM EBCDIC - France (20297 + Euro symbol) |
1148 | IBM EBCDIC - International (500 + Euro symbol) |
1149 | IBM EBCDIC - Icelandic (20871 + Euro symbol) |
1200 | Unicode UCS-2 Little-Endian (BMP of ISO 10646) |
1201 | Unicode UCS-2 Big-Endian |
1250 | ANSI - Central European |
1251 | ANSI - Cyrillic |
1252 | ANSI - Latin I |
1253 | ANSI - Greek |
1254 | ANSI - Turkish |
1255 | ANSI - Hebrew |
1256 | ANSI - Arabic |
1257 | ANSI - Baltic |
1258 | ANSI/OEM - Vietnamese |
1361 | Korean (Johab) |
10000 | MAC - Roman |
10001 | MAC - Japanese |
10002 | MAC - Traditional Chinese (Big5) |
10003 | MAC - Korean |
10004 | MAC - Arabic |
10005 | MAC - Hebrew |
10006 | MAC - Greek I |
10007 | MAC - Cyrillic |
10008 | MAC - Simplified Chinese (GB 2312) |
10010 | MAC - Romania |
10017 | MAC - Ukraine |
10021 | MAC - Thai |
10029 | MAC - Latin II |
10079 | MAC - Icelandic |
10081 | MAC - Turkish |
10082 | MAC - Croatia |
12000 | Unicode UCS-4 Little-Endian |
12001 | Unicode UCS-4 Big-Endian |
20000 | CNS - Taiwan |
20001 | TCA - Taiwan |
20002 | Eten - Taiwan |
20003 | IBM5550 - Taiwan |
20004 | TeleText - Taiwan |
20005 | Wang - Taiwan |
20105 | IA5 IRV International Alphabet No. 5 (7-bit) |
20106 | IA5 German (7-bit) |
20107 | IA5 Swedish (7-bit) |
20108 | IA5 Norwegian (7-bit) |
20127 | US-ASCII (7-bit) |
20261 | T.61 |
20269 | ISO 6937 Non-Spacing Accent |
20273 | IBM EBCDIC - Germany |
20277 | IBM EBCDIC - Denmark/Norway |
20278 | IBM EBCDIC - Finland/Sweden |
20280 | IBM EBCDIC - Italy |
20284 | IBM EBCDIC - Latin America/Spain |
20285 | IBM EBCDIC - United Kingdom |
20290 | IBM EBCDIC - Japanese Katakana Extended |
20297 | IBM EBCDIC - France |
20420 | IBM EBCDIC - Arabic |
20423 | IBM EBCDIC - Greek |
20424 | IBM EBCDIC - Hebrew |
20833 | IBM EBCDIC - Korean Extended |
20838 | IBM EBCDIC - Thai |
20866 | Russian - KOI8-R |
20871 | IBM EBCDIC - Icelandic |
20880 | IBM EBCDIC - Cyrillic (Russian) |
20905 | IBM EBCDIC - Turkish |
20924 | IBM EBCDIC - Latin-1/Open System (1047 + Euro symbol) |
20932 | JIS X 0208-1990 & 0121-1990 |
20936 | Simplified Chinese (GB2312) |
21025 | IBM EBCDIC - Cyrillic (Serbian, Bulgarian) |
21027 | (deprecated) |
21866 | Ukrainian (KOI8-U) |
28591 | ISO 8859-1 Latin I |
28592 | ISO 8859-2 Central Europe |
28593 | ISO 8859-3 Latin 3 |
28594 | ISO 8859-4 Baltic |
28595 | ISO 8859-5 Cyrillic |
28596 | ISO 8859-6 Arabic |
28597 | ISO 8859-7 Greek |
28598 | ISO 8859-8 Hebrew |
28599 | ISO 8859-9 Latin 5 |
28605 | ISO 8859-15 Latin 9 |
29001 | Europa 3 |
38598 | ISO 8859-8 Hebrew |
50220 | ISO 2022 Japanese with no halfwidth Katakana |
50221 | ISO 2022 Japanese with halfwidth Katakana |
50222 | ISO 2022 Japanese JIS X 0201-1989 |
50225 | ISO 2022 Korean |
50227 | ISO 2022 Simplified Chinese |
50229 | ISO 2022 Traditional Chinese |
50930 | Japanese (Katakana) Extended |
50931 | US/Canada and Japanese |
50933 | Korean Extended and Korean |
50935 | Simplified Chinese Extended and Simplified Chinese |
50936 | Simplified Chinese |
50937 | US/Canada and Traditional Chinese |
50939 | Japanese (Latin) Extended and Japanese |
51932 | EUC - Japanese |
51936 | EUC - Simplified Chinese |
51949 | EUC - Korean |
51950 | EUC - Traditional Chinese |
52936 | HZ-GB2312 Simplified Chinese |
54936 | Windows XP: GB18030 Simplified Chinese (4 Byte) |
57002 | ISCII Devanagari |
57003 | ISCII Bengali |
57004 | ISCII Tamil |
57005 | ISCII Telugu |
57006 | ISCII Assamese |
57007 | ISCII Oriya |
57008 | ISCII Kannada |
57009 | ISCII Malayalam |
57010 | ISCII Gujarati |
57011 | ISCII Punjabi |
65000 | Unicode UTF-7 |
65001 | Unicode UTF-8 |
Dino Esposito
Wintellect
March 15, 2002
Table mapping is the process that controls how data adapters copy tables and columns of data from a physical data source to ADO.NET in-memory objects. A data adapter object utilizes the Fill method to populate a DataSet or a DataTable object with data retrieved by a SELECT command. Internally, the Fill method makes use of a data reader to get to the data and the metadata that describe the structure and content of the source tables. The data read is then copied into ad hoc memory containers (that is, the DataTable). The table mapping mechanism is the set of rules and parameters that lets you control how the SQL result sets are mapped onto in-memory objects.
The following code shows the typical way to collect data out of a data source using a data adapter:
SqlDataAdapter da;
DataSet ds;
da = new SqlDataAdapter(m_selectCommand, m_connectionString);
ds = new DataSet();
da.Fill(ds);
Admittedly, this code isn't exactly rocket science, and I'd venture that you are already familiar with it and have run it successfully more than once. But what really happens behind the scenes of this code? Believe it or not, there's a little-known object running behind the curtain whose nature and behavior heavily affect the final results.
When you run the code shown above, a new DataTable object is added to the (initially empty) DataSet for each result set that the execution of the SELECT statement may have generated. If you pass a non-empty DataSet to the Fill method, the contents of the result sets and the existing DataTable objects are merged as long as a match is found with the name of the DataTable. Similarly, when it comes to copying rows of data from the result set to a given DataTable, the contents of matching columns are merged. By contrast, if no match is found on the column name, then a new DataColumn object is created (with default settings) and added to the in-memory DataTable.
The question is, how does the adapter map the contents of the result sets onto the DataSet's constituent items? What tells the data adapter which table and column names to match? The TableMappings property of the data adapter is the object behind the curtain that decides how tables in the result set map to the objects in the DataSet.
The mapping mechanism begins to work once the SELECT command is terminated and the data adapter has returned one or more result sets. The adapter gets a reference to an internal data reader object and starts processing the fetched data. By default, the data reader is positioned on the first result set. The following pseudocode describes what's going on:
int Fill(DataSet ds)
{
    // Execute the SELECT command and get a reader
    IDataReader dr = SelectCommand.ExecuteReader();

    // Map the first result set to the DataSet and return the table
    bool bMoreToRead, bMoreResults;
    DataTable dt = MapCurrentResultSet(ds);

    // Copy rows from the result set to the specified DataTable
    while (true)
    {
        // Move to the next data row
        bMoreToRead = dr.Read();
        if (!bMoreToRead)
        {
            // No more rows in this result set. More result sets?
            bMoreResults = dr.NextResult();
            if (!bMoreResults)
                break;
            else
                // Map this new result set and continue the loop
                dt = MapCurrentResultSet(ds);
        }
        else
            AddRowToDataTable(dt);
    }
}
The Fill method maps the first result set to a DataTable object in the given DataSet. Next, it loops through the result set and adds rows of data to the DataTable. When the end of the result set is reached, the method looks for a new result set and repeats the operation.
Mapping a result set to a DataSet is a process that comprises two phases: table mapping and column mapping.
During the table mapping step, the data adapter has to find a name for the DataTable that will contain the rows in the result set being processed.
Each result set is given a default name that you might want to change at will. The default name of the result set depends on the signature of the Fill method that has been used for the call. For example, let's consider the two overloads below:
Fill(ds);
Fill(ds, "MyTable");
In the former case, the name of the first result set defaults to Table. Further result sets are named Table1, Table2, and so on. In the latter case, the first result set is called MyTable and the others are named after it: MyTable1, MyTable2, and so forth.
The adapter looks up its TableMappings collection for an entry that matches the default name of the result set. If a match is found, the adapter attempts to locate a DataTable object with the name specified in the mapping in the DataSet. If no such DataTable object exists, it is created and then filled. If such a DataTable exists in the DataSet, its contents are merged with the contents of the result set.
Figure 1. Mapping a result set onto a DataSet object
In Figure 1, I assume that the query produces at least three result sets. The TableMappings collection contains three default names and the corresponding mapping names. If the SELECT command creates a result set with a default name of Table, then its contents go into a new or existing DataTable object called Employees. How do you control this from a code standpoint? Look at the code snippet below:
SqlDataAdapter da = new SqlDataAdapter(...);
DataSet ds = new DataSet();
DataTableMapping dtm1, dtm2, dtm3;
dtm1 = da.TableMappings.Add("Table", "Employees");
dtm2 = da.TableMappings.Add("Table1", "Products");
dtm3 = da.TableMappings.Add("Table2", "Orders");
da.Fill(ds);
Of course, the default names you map onto your own names must coincide with the default names originated by the call to the Fill method. In other words, if you change the last line to da.Fill(ds, "MyTable"), the code won't work any longer, because the default names are now MyTable, MyTable1, and MyTable2, for which the above TableMappings collection has no entries.
You can have any number of table mappings, and they need not cover all the expected result sets. For example, you can map only Table1, the second result set returned by the command. In this case, the destination DataSet will hold three tables, named Table, Products, and Table2.
The DataTableMapping object describes a mapped relationship between a result set and a DataTable object in a DataSet. The SourceTable property returns the default result set name, whereas DataSetTable contains the mapping name.
If you use Visual Studio® .NET, you can configure the table mappings in a visual manner by running the Data Adapter Configuration Wizard.
If table mapping ended here, then it wouldn't be such a big deal. In fact, if your goal is to give a mnemonic name to your DataSet tables, you can use the following code:
SqlDataAdapter da = new SqlDataAdapter(...);
DataSet ds = new DataSet();
da.Fill(ds);
ds.Tables["Table"].TableName = "Employees";
ds.Tables["Table1"].TableName = "Products";
The final effect is exactly the same. The mapping mechanism, though, has another, rather interesting facet: column mapping. The figure below extends the previous diagram and includes details of the column mapping.
Figure 2. Table and column mappings
The DataTableMapping object has a property called ColumnMappings, which is a collection of DataColumnMapping objects. A column mapping represents a mapping between the name of a column in the result set and the name of the corresponding column in the DataTable object. In short, the DataColumnMapping object enables you to use column names in a DataTable that are different from those in the data source.
SqlDataAdapter da = new SqlDataAdapter(...);
DataSet ds = new DataSet();
DataTableMapping dtm1;
dtm1 = da.TableMappings.Add("Table", "Employees");
dtm1.ColumnMappings.Add("employeeid", "ID");
dtm1.ColumnMappings.Add("firstname", "Name");
dtm1.ColumnMappings.Add("lastname", "Surname");
da.Fill(ds);
In the code above, I assume that the fetched result set has columns called employeeid, firstname, and lastname. These columns have to be copied into an in-memory DataTable child of a DataSet. By default, the target DataColumn has the same name as the source column. The column mapping mechanism, though, allows you to change the name of the in-memory column. For example, when the column employeeid is copied to memory, it is renamed to ID and placed in a DataTable called Employees.
The name of the column is the only argument you can change at this level. Keep in mind that this entire mapping takes place automatically within the body of the Fill method. When Fill terminates and each column in the source result set has been transformed into a DataColumn object, you can intervene and apply further changes: relationships, constraints, primary key, read-only, auto-increment seed and step, support for null values, and more.
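As a sketch of the kind of post-Fill refinement just mentioned, the snippet below applies a primary key, auto-increment, and null settings to a hand-built DataTable standing in for the one Fill would have created (the table and column names are illustrative):

```csharp
using System;
using System.Data;

class SchemaTweaks
{
    static void Main()
    {
        // In-memory stand-in for the Employees table that Fill would create.
        DataTable dt = new DataTable("Employees");
        dt.Columns.Add("ID", typeof(int));
        dt.Columns.Add("Name", typeof(string));

        // Refinements the mapping step cannot make for you:
        dt.PrimaryKey = new DataColumn[] { dt.Columns["ID"] };
        dt.Columns["ID"].AutoIncrement = true;
        dt.Columns["ID"].AutoIncrementSeed = 1;
        dt.Columns["ID"].AutoIncrementStep = 1;
        dt.Columns["Name"].AllowDBNull = false;

        DataRow row = dt.NewRow();
        row["Name"] = "Dino";
        dt.Rows.Add(row);
        Console.WriteLine(row["ID"]);   // prints 1 (auto-incremented)
    }
}
```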
In summary, the Fill method accomplishes two main operations. First off, it maps the source result sets onto in-memory tables. Secondly, it fills the tables with the data fetched out of the physical data source. While accomplishing any of these tasks, the Fill method could raise some special exceptions. Conceptually, an exception is an anomalous situation that needs to be specifically addressed from a code standpoint. When the adapter can't find a table or a column mapping, and when a required DataTable or DataColumn can't be found in the target DataSet, the adapter throws a type of lightweight exception.
Unlike real exceptions, which must necessarily be resolved in code, this special breed of adapter exception has to be resolved declaratively, by choosing an action from a small set of feasible options. Adapters raise two types of lightweight exceptions:
A missing mapping action is required in two circumstances when the adapter is collecting data to fill the DataSet. You need a missing mapping action if a default name is not found in the TableMappings, or if a column name is not available in the table's ColumnMappings collection. You must customize the behavior of the adapter's MissingMappingAction property in order to handle such an exception. Feasible values for the property come from the MissingMappingAction enum type listed in the table below.
Value | Description |
---|---|
Error | A SystemException is generated whenever a missing column or a table is detected. |
Ignore | The unmapped column or table is ignored. |
Passthrough | Default option; add the missing table or column with the default name. |
Table 1. The MissingMappingAction enumeration
Unless you explicitly set the MissingMappingAction property prior to filling the adapter, it assumes a default value of Passthrough. As a result, the table or the column is added to the DataSet using the default name. For example, if no table mapping has been specified for the result set called Table, then the target DataTable takes the same name. In fact, the following statements end up adding a new DataTable to the DataSet called Table and MyTable respectively.
da.Fill(ds);
da.Fill(ds, "MyTable");
If you set the MissingMappingAction property to Ignore, then any unmapped table or column is simply ignored. No error is detected, but there will be no content for the offending result set (or one of its columns) in the target DataSet.
If the MissingMappingAction property is set to Error, then the adapter simply throws a SystemException whenever a missing mapping is detected.
Once the adapter is done with the mapping phase, it starts populating the target DataSet with the contents of the selected result sets. Any required DataTable or DataColumn object that is not available in the target DataSet triggers another lightweight exception and requires another declarative action: the missing schema action.
A missing schema action is required if the DataSet does not contain a table with the name that has been determined during the table mapping step. Similarly, the same action is required if the DataSet table does not contain a column with the expected mapping name. MissingSchemaAction is the property that you set to indicate the action you want to be taken in case of an insufficient table schema. Feasible values for the property come from the MissingSchemaAction enum type, listed in the table below.
Value | Description |
---|---|
Error | A SystemException is generated whenever a missing column or a table is detected. |
Ignore | The unmapped column or table is ignored. |
Add | Default option; complete the schema by adding any missing column or table. |
AddWithKey | Adds primary key and constraints. |
Table 2. The MissingSchemaAction enumeration
By default, the MissingSchemaAction property is set to Add. As a result, the DataSet is completed by adding any constituent item that is missing, whether a DataTable or a DataColumn. Bear in mind, though, that the schema information added in this way is very limited. It only includes name and type. If you want extra information, like primary key, auto-increment, read-only, and null settings, use the AddWithKey option instead. Notice that even if you use the AddWithKey option, not all available information about the column is loaded into the DataColumn. For example, AddWithKey marks a column as auto-increment, but does not set the related seed and step properties. Also, the default value for the source column, if any, is not automatically copied. The primary key is imported, but not any extra indexes you may have set.
The other two options, Ignore and Error, work exactly as they did with the MissingMappingAction property.
Although I repeatedly talked about the actions in terms of (lightweight) exceptions, the actions you declare to execute in case of missing objects are not as expensive as true exceptions. On the other hand, this doesn't mean that your code is completely unaffected by such actions. More specifically, filling a DataSet that already contains all the needed schema information is a form of code optimization. This is especially true as long as your code is structured in such a way that you repeatedly fill an empty DataSet with a fixed schema. In this case, using a global DataSet object preloaded with schema information helps to prevent all those requests for recovery actions.
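The "global DataSet preloaded with schema" idea can be sketched without a live connection by cloning a schema-complete DataSet; in real code the schema would come from FillSchema or a prior Fill, and the table and column names below are illustrative:

```csharp
using System;
using System.Data;

class SchemaReuse
{
    static void Main()
    {
        // Build the schema once (in real code, FillSchema would do this).
        DataSet master = new DataSet();
        DataTable t = master.Tables.Add("Employees");
        t.Columns.Add("ID", typeof(int));
        t.Columns.Add("Name", typeof(string));

        // Clone() copies the schema only, not the rows: a ready-made,
        // empty DataSet that Fill can populate with no recovery actions.
        DataSet working = master.Clone();
        Console.WriteLine("{0} columns, {1} rows",
            working.Tables["Employees"].Columns.Count,
            working.Tables["Employees"].Rows.Count);   // 2 columns, 0 rows
    }
}
```

With such a preloaded DataSet you could even set MissingSchemaAction to Error, since no schema completion should ever be needed.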
How can you fill a DataSet with the schema information that belongs to a group of result sets? Guess what, the data adapter objects have a tailor-made method: FillSchema.
DataTable[] FillSchema(DataSet ds, SchemaType mappingMode);
FillSchema takes a DataSet and adds as many tables to it as needed by the SELECT command associated with the adapter. The method returns the various DataTable objects (only schema, no data) created in an array. The mapping mode parameter can be one of the values defined in the SchemaType enum.
Value | Description |
---|---|
Mapped | Apply any existing table mappings to the incoming schema. Configure the DataSet with the transformed schema. Preferable option. |
Source | Ignore any table mappings on the DataAdapter. Configure the DataSet using the incoming schema without applying any transformations. |
Table 3. The SchemaType enumeration
The options available are quite self-explanatory. Mapped describes what happens when mappings are defined. Source, instead, deliberately ignores any mappings you may have set. The tables in the DataSet retain their default name and all the columns maintain the original name they were given in the source tables.
To round out this discussion about table mappings, let's review a realistic scenario in which you might want to consider their use. Suppose that you have to manage different user profiles. Each profile requires you to access the same tables, but return a different set of columns. You can tackle this issue in a number of ways, but the ADO.NET table mapping mechanism may be the best.
The idea is that you always use a single query, the one targeted to the most privileged profile, and then populate the resulting DataSet with only the columns specific to the current user profile. Here's some Visual Basic® code that illustrates the point:
Dim da As SqlDataAdapter
da = New SqlDataAdapter(m_selectCommand, m_connectionString)
Dim dtm As DataTableMapping
dtm = da.TableMappings.Add(da.DefaultSourceTableName, "Employees")
If bUserProfileAdmin Then
    dtm.ColumnMappings.Add("EmployeeID", "ID")
    dtm.ColumnMappings.Add("LastName", "Last Name")
    dtm.ColumnMappings.Add("FirstName", "Name")
    dtm.ColumnMappings.Add("Title", "Position")
    dtm.ColumnMappings.Add("HireDate", "Hired")
Else
    dtm.ColumnMappings.Add("LastName", "Last Name")
    dtm.ColumnMappings.Add("FirstName", "Name")
End If
Dim ds As DataSet = New DataSet()
da.MissingMappingAction = MissingMappingAction.Ignore
da.MissingSchemaAction = MissingSchemaAction.Add
da.Fill(ds)
In this simple case, the query returns only one result set, which I identify through its default name of Table. Notice that, for the sake of generality, you should use the DefaultSourceTableName property of the data adapter object rather than the literal name (Table). The table mapping defines different column mappings according to the role of the user. If the user is an administrator, the DataSet includes more columns. Of course, the actual implementation of concepts like roles and privileges is completely up to you. The key statement for all this to work as expected is the value of the MissingMappingAction property, which has been set to Ignore. The result is that unmapped columns are just ignored. Finally, remember that case sensitivity is important for column names, and that the name of the column mapping must match the case of the source column name.
In this article, I reviewed the table mapping mechanism available in ADO.NET. Table mapping is the set of rules and behaviors that govern the passage of rows from the data source to an in-memory DataSet. The mapping consists of two steps, table and column mapping, and is only the first phase of a broader operation in which a data adapter object fills a DataSet. The second phase begins when the target DataSet is actually populated. Any logical exception in the mapping and filling phases can be controlled by declaring which actions to take when a table or a column is not explicitly bound to a DataSet table, or when a needed table or column is not present in the DataSet.
What's the difference between @Register and @Import, and what's the right place for a non-system assembly DLL used by ASP.NET applications?
First and foremost, ASP.NET applications are .NET applications. As such, they need to link to any assemblies whose objects they plan to use. The @Register directive serves just this purpose. Any assembly you register with the page is then passed as a reference to the compiler of choice. The role of the @Import directive is less important, as its function is to simplify coding. @Import lets you import a namespace, not an assembly. An assembly can contain multiple namespaces. For example, the assembly system.data.dll contains System.Data, System.Data.OleDb, System.Data.SqlClient, and more.
Importing a namespace lets you write simpler code in the sense that you don't need to specify the full path to a given object. Importing System.Data allows you to use a data set through the class DataSet, instead of System.Data.DataSet. To use a DataSet, you can do without the @Import directive, but not without the reference to system.data.dll.
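The distinction can be seen in a minimal page fragment; the assembly reference to system.data.dll is implicit because it lives in the GAC, and @Import merely shortens the type names (the page content here is a hypothetical illustration):

```aspx
<%@ Page Language="C#" %>
<%@ Import Namespace="System.Data" %>
<script runat="server">
void Page_Load(object sender, EventArgs e)
{
    // Thanks to @Import, no System.Data prefix is needed here.
    DataSet ds = new DataSet("Sample");
    Response.Write(ds.DataSetName);
}
</script>
```

Remove the @Import line and the page still compiles, provided you write System.Data.DataSet in full.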
In particular, with ASP.NET applications you don't need to explicitly register any assemblies available in the Global Assembly Cache (GAC). You use @Register only to reference custom assemblies that have not been registered with the system GAC.
Where do these assemblies reside? They must be placed in the BIN directory under the application's virtual directory. If this directory does not exist, you should create it. If your ASP.NET application does not use a virtual directory, then it implicitly runs from the Web server's root directory. Therefore, the BIN directory is below the Web server's root. For example, c:\inetpub\wwwroot\bin.
Dino Esposito writes for Developer Network Journal and MSDN News. In addition, he is the author of Building Web Solutions with ASP.NET and ADO.NET from Microsoft Press, and the cofounder of http://www.vb2themax.com/. You can reach Dino at dinoe@wintellect.com.
Dino Esposito
Wintellect
July 12, 2001
Download ViewManager.exe.
A large number of applications need to render data that is somehow related to other data. A well-known example is given by the very popular Customers table that needs to link to an equally famous Orders table. What do they have in common? An even more famous CustID field, of course. How do you typically solve the problem of rendering all the orders for a given customer? Depending on the constraints and requirements of your application, a number of solutions are feasible.
In .NET, a new valuable tool can be added to your programmer's toolkit. This tool is the DataRelation object. It basically represents a parent/child relationship set between two tables. The DataRelation per se is not such a big deal. It gains a lot of importance and usefulness, though, when you look at it in light of the support that the DataSet and other ADO.NET objects and controls have for relations.
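A minimal sketch of the DataRelation object at work on two hand-built in-memory tables; the table and column names mirror the Customers/Orders example used throughout, and the data is illustrative:

```csharp
using System;
using System.Data;

class DataRelationDemo
{
    static void Main()
    {
        // Hypothetical in-memory stand-ins for Customers and Orders.
        DataSet ds = new DataSet();
        DataTable customers = ds.Tables.Add("Customers");
        customers.Columns.Add("CustID", typeof(int));
        customers.Columns.Add("Name", typeof(string));
        customers.Rows.Add(1, "Acme");

        DataTable orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("CustID", typeof(int));
        orders.Rows.Add(100, 1);
        orders.Rows.Add(101, 1);

        // The parent/child relationship: one customer, many orders.
        DataRelation rel = ds.Relations.Add("Cust2Orders",
            customers.Columns["CustID"], orders.Columns["CustID"]);

        // Navigate from a parent row to its children without a JOIN.
        DataRow acme = customers.Rows[0];
        Console.WriteLine(acme.GetChildRows(rel).Length);   // prints 2
    }
}
```

Note that no customer data is duplicated: the orders are reached by navigation, not by repeating the parent columns on every row.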
When you need to fetch data from related tables, a popular solution is to use a plain old INNER JOIN SQL command. It merges the columns you need from the two input tables into a single resultset. The following code creates a final resultset with two columns from Customers and three columns from Orders.
SELECT c.Name, c.City, o.Date, o.TotalPrice, o.ShipAddress
FROM Customers AS c INNER JOIN Orders AS o
ON c.CustID = o.CustID
The INNER JOIN statement involves the database server and ends up returning rows with a certain quantity of duplicated data. When you run the above query, you aim to obtain and process all the orders for a certain customer. So, you don't need to repeat the information (address, city, and the like) you want to return about the customer. Nevertheless, this is exactly what you get back in the form of a tabular resultset.
In ADO, the data shaping service lets you create hierarchical and, hence, even irregularly shaped recordsets. By using data shaping, the Customers/Orders relationship would have been expressed in terms of one row with all the customer information, plus an extra field pointing to a child recordset. The child recordset would feature one row for each order associated with the specified customer ID. The structure of the order rows is determined by the fields you want to query from the Orders table.
ADO data shaping requires you to write queries with a special language called the SHAPE language.
SHAPE {SELECT Name, City, CustID FROM Customers}
APPEND ({SELECT CustID, Date, TotalPrice, ShipAddress FROM Orders} AS oOrders
RELATE CustID TO CustID)
The data shaping service then executes all the necessary queries on the database within a single connection. The results are shaped into hierarchical recordsets on the way to the client, thanks to a special OLE DB service.
INNER JOINs and data shaping have more to do with the way in which you fetch and store the related data. What about retrieving and showing this data in a client application?
What an INNER JOIN statement returns is a tabular structure, and the extraction of the needed information is completely up to you. With data shaping, the information at least comes in with a layout that lends itself quite well to being displayed the way it should be.
The customer information is distinct from the list of orders. You access it as a normal recordset field with a particular name.
Set rsCustomerOrders = oRS.Fields("oOrders").Value
To access the orders for a given customer, select the corresponding row on the Customers table and then access the field whose name matches the previously set relation.
To represent parent/child data relationships in ADO.NET, you are expected to use the DataRelation object. If you're familiar with ADO data shaping, you'll soon recognize, under the hood of a DataRelation object, the SHAPE language code snippet that I just showed above.
In ADO.NET, a DataRelation object is used to establish an in-memory relationship between two DataTable objects. The relationship is set on matching values found in one column that the two tables have in common. A column in ADO.NET is represented by the DataColumn object.
Let's see how to code in ADO.NET the Customer/Orders relationship seen earlier.
DataColumn dcCustomerCustID, dcOrdersCustID;

// Fill in the two DataColumn objects
dcCustomerCustID = DataSet1.Tables["Customers"].Columns["CustID"];
dcOrdersCustID = DataSet1.Tables["Orders"].Columns["CustID"];

// Create the relationship between the two columns
DataRelation relCustomerOrders;
relCustomerOrders = new DataRelation("CustomerOrders",
    dcCustomerCustID, dcOrdersCustID);
A freshly created DataRelation object is rather useless if you don't add it to a DataSet.
DataSet1.Relations.Add(relCustomerOrders);
The DataSet object contains a Relations data member, which is a DataRelationCollection object where all the relations involving DataSet's tables are kept.
Notice that any relation is created between matching columns in two tables within the same DataSet. For this to happen, the .NET type of the columns must be identical. The .NET type of a column is given by the value returned by its DataType property.
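As a quick illustration, here is a minimal in-memory sketch (table and column names are made up for the example): the relation can be added only because both CustID columns report the same DataType.

```csharp
using System;
using System.Data;

// Build two in-memory tables that share a CustID column of the same .NET type.
DataSet ds = new DataSet("CustomerOrders");

DataTable customers = ds.Tables.Add("Customers");
customers.Columns.Add("CustID", typeof(int));

DataTable orders = ds.Tables.Add("Orders");
orders.Columns.Add("CustID", typeof(int));

// DataType reports the .NET type of each column; the two must be identical.
Console.WriteLine(customers.Columns["CustID"].DataType);  // System.Int32
Console.WriteLine(orders.Columns["CustID"].DataType);     // System.Int32

// Since the types match, the relation can be created and added safely.
DataRelation rel = new DataRelation("CustomerOrders",
    customers.Columns["CustID"], orders.Columns["CustID"]);
ds.Relations.Add(rel);
```

Had one of the two columns been declared as, say, typeof(string), the DataRelation constructor would refuse to match the columns and throw an exception instead.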
When you have a parent/child relationship set between two tables, deleting or updating a value in the parent table can affect the rows of the child table in one of the following ways: the change can cascade down to the child rows (Rule.Cascade), the child key values can be set to null (Rule.SetNull) or to the column's default (Rule.SetDefault), or the child rows can be left intact (Rule.None).
If you don't manage this explicitly through a ForeignKeyConstraint policy, the operation originates an exception.
So, if you're going to create an in-memory relation for cached, disconnected data that you plan to modify, make sure you first define a ForeignKeyConstraint object on the parent table. This ensures that any change that could affect the related tables is properly managed. You create a constraint like this:
ForeignKeyConstraint fkc;
DataColumn dcCustomersCustID, dcOrdersCustID;

// Get columns and create the constraint
dcCustomersCustID = DataSet1.Tables["Customers"].Columns["CustID"];
dcOrdersCustID = DataSet1.Tables["Orders"].Columns["CustID"];
fkc = new ForeignKeyConstraint("CustomersFK",
    dcCustomersCustID, dcOrdersCustID);

// Shape up the constraint for delete and update
fkc.DeleteRule = Rule.SetNull;
fkc.UpdateRule = Rule.Cascade;
A ForeignKeyConstraint is created on the parent table using the common column that the parent and child table share. To specify how a child table behaves whenever a row on the parent table is deleted or updated, you use the DeleteRule and UpdateRule fields. In this case, I set all the values on the child row to NULL when the corresponding parent row is deleted. Furthermore, any update simply trickles down from the parent row to the child row.
A DataTable object maintains its collection of ForeignKeyConstraint objects in a ConstraintCollection class that is accessible through the DataTable's Constraints property. As a final note, bear in mind that constraints are not enforced on tables if you set the EnforceConstraints property to false.
// Add the constraint and enforce it
DataSet1.Tables["Customers"].Constraints.Add(fkc);
DataSet1.EnforceConstraints = true;
Upon creation, ADO.NET verifies that the DataRelation object can be effectively created. This basically means that it checks whether or not all the involved columns are really part of the given tables. According to the syntax, in fact, you could pass the DataRelation's constructor a DataColumn object that you create on the fly with the right type and name but not the "right" column.
The DataRelation object and the involved DataTable objects are disjointed and independent objects until the relation is added to the DataSet's Relations collection. When this happens, ADO.NET prohibits any changes on the tables that could invalidate the relation. For example, changes on columns are disallowed, as well as moving the tables from one DataSet to another.
The DataRelation object also features a method called CheckStateForProperty that allows you to verify the validity of the relation before you add it to a DataSet. The checks performed by this method include whether the parent and child tables belong to different DataSet objects, whether the column types match, and whether the parent and child columns are the same column.
You can call this method even if the DataRelation doesn't yet belong to a DataSet (the same DataSet to which the involved tables belong). CheckStateForProperty doesn't return a Boolean value to indicate success or failure. In case of error, you are notified through a DataException exception.
Given a parent/child data relation, how can you access the child rows associated with a parent row? In other words, assuming that you have the same Customers and Orders tables in the DataSet, how can you get the orders for a given Customers row?
In the code accessing the related data, you first obtain a DataRow object representing the parent row. You can do this in a number of ways, all strictly dependent on the structure of your application.
For example, if you know the primary key value that uniquely identifies that row, you can use the Find method on the DataRowCollection object that represents the rows of the table.
DataRow r = DataSet1.Tables["Customers"].Rows.Find(nCustID);
Once you hold the right DataRow object, obtaining the child rows according to a given relation is as easy as calling the method GetChildRows to fill up an array of DataRow objects.
DataRow[] rgCustomerOrders; rgCustomerOrders = r.GetChildRows(relCustomerOrders);
GetChildRows takes one argument, a reference to a valid DataRelation object set on that DataSet. It returns the child rows as an array of DataRow objects. The following code shows how to dump all the orders of a given customer to the console.
for (int i=0; i < rgCustomerOrders.Length; i++)
{
    DataRow tmp = rgCustomerOrders[i];
    Console.WriteLine(tmp["CustID"].ToString());
    Console.WriteLine(tmp["Date"].ToString());
    Console.WriteLine(tmp["ShipAddress"].ToString());
    Console.WriteLine("");
}
To be honest, GetChildRows can be called through a couple of other overloads. You can certainly specify the relation as a DataRelation object, as shown above. However, you can also indicate the relation by name.
rgCustomerOrders = r.GetChildRows("CustomerOrders");
In addition, you can select the version of the various rows that must be returned. You do this through the following signatures:
public DataRow[] GetChildRows(
    DataRelation relation,
    DataRowVersion version
);
public DataRow[] GetChildRows(
    String relationName,
    DataRowVersion version
);
You indicate the version of the rows through the values in the DataRowVersion enumeration. Possible values are Default, Original, Current, and Proposed.
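A minimal in-memory sketch (made-up data) shows how the version parameter plays out: after a pending change, the same child row can be read through both its current and its original values.

```csharp
using System;
using System.Data;

DataSet ds = new DataSet();
DataTable customers = ds.Tables.Add("Customers");
customers.Columns.Add("CustID", typeof(int));
DataTable orders = ds.Tables.Add("Orders");
orders.Columns.Add("CustID", typeof(int));
orders.Columns.Add("ShipAddress", typeof(string));
ds.Relations.Add("CustomerOrders",
    customers.Columns["CustID"], orders.Columns["CustID"]);

customers.Rows.Add(new object[] { 1 });
orders.Rows.Add(new object[] { 1, "Old Street 1" });
ds.AcceptChanges();                              // current values become the original ones

orders.Rows[0]["ShipAddress"] = "New Street 2";  // pending, uncommitted change

DataRow customer = customers.Rows[0];
DataRow[] rgCurrent = customer.GetChildRows("CustomerOrders", DataRowVersion.Current);

// The same child row carries both versions side by side.
Console.WriteLine(rgCurrent[0]["ShipAddress", DataRowVersion.Current]);   // New Street 2
Console.WriteLine(rgCurrent[0]["ShipAddress", DataRowVersion.Original]);  // Old Street 1
```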
Since the DataRelation object associates rows in one DataTable object with rows in another DataTable object, it lends itself very well to build master/detail views. The GetChildRows method is the key tool to build such views. If you find this behavior quite cool, you'll love the Windows Forms DataGrid control, which does even more.
You set the DataGrid control to show data coming from the source specified in its DataSource property. If DataSource happens to point to a container object, like a DataSet or a DataViewManager, the grid features one row for each child table, prefixed by a + symbol. Click there and you'll see the content of that table.
You can select a specific table by setting the DataMember property with the name of the child table.
theMasterGrid.DataSource = ds;
theMasterGrid.DataMember = "Customers";
If you have two datagrid controls on your form and want to build a master/detail view, you can associate each grid with a different table, and then hook up the event that fires when a new item is selected in the master table. At that point, you could access the array of related child rows, create a DataTable on the fly, and update the DataSource property of the detail DataGrid. Notice that at run time you must use the SetDataBinding method to reset the DataSource property of a Windows Forms datagrid.
This approach works just fine, but DataGrid controls can better perform this action. You can have the DataGrid automatically refresh the detail view if you use a special syntax when setting the DataMember property.
theChildGrid.DataSource = ds;
theChildGrid.DataMember = "Customers.CustomerOrders";
If you concatenate the name of the parent table with the name of an existing relation and put a dot character in the middle of the two, you instruct the DataGrid control to automatically and silently call GetChildRows for the CustomerOrders relation on the currently selected row of the Customers table.
The magic performed by the DataGrid doesn't end here. As long as the two grids have the same data source, the child one automatically hooks the event that indicates that a new row has been selected in the master grid. At the end of the day, a relation, two Windows Forms datagrids, and the following four lines of code are enough to produce a free, auto-refreshing master/detail view.
theMasterGrid.DataSource = ds;
theMasterGrid.DataMember = "Customers";
theChildGrid.DataSource = ds;
theChildGrid.DataMember = "Customers.CustomerOrders";
A good question to raise at this point is, "How can the child grid know about the parent grid?" Basically, any DataGrid that is assigned the content of a parent/child relationship looks for a running instance of another grid object in the same form, with the same content in the DataSource property and with a DataMember that matches the first part of its member expression (Customers in the above example).
Relations constitute key information for DataSet objects. DataSets, though, can switch at any time from their typical relational representation to a hierarchical one based on XML. When relations are set, the DataSet object internally works in a way that resembles the ADO data shaping representation: an extra field is silently added to each row to link it to its group of child rows in the child table. What happens to this information when you switch from the traditional representation to XML?
You can do this in two ways. Either you create a new instance of the XmlDataDocument class based on the DataSet
XmlDataDocument xmlDoc = new XmlDataDocument(DataSet1);
Or you can save the whole DataSet to XML using the WriteXml method.
In both cases, the results you get differ quite a bit depending on the value that the Nested property has on the DataRelation object. Nested is a Boolean value that is set to false by default. It controls the way in which the child rows of a relation are rendered in XML. Two DataSet tables are rendered like this:
<CustomerOrders>
  <Customers>
    <CustID>1</CustID>
    <Name>Acme Inc</Name>
  </Customers>
  <Customers>
    <CustID>2</CustID>
    <Name>Foo Corp</Name>
  </Customers>
  <Orders>
    <CustID>1</CustID>
    <Date>2000-09-25T00:00:00</Date>
  </Orders>
</CustomerOrders>
Each record is rendered as a subtree with the table name and as many text nodes as the number of columns. This representation doesn't change if you have relations set, as long as Nested remains set to false.
If you set Nested to true, then all the order nodes for any given customer will be rendered as a child subtree.
<CustomerOrders>
  <Customers>
    <CustID>1</CustID>
    <Orders>
      <CustID>1</CustID>
      <Date>2000-09-25T00:00:00</Date>
    </Orders>
    <Name>Acme Inc</Name>
  </Customers>
  <Customers>
    <CustID>2</CustID>
    <Name>Foo Corp</Name>
  </Customers>
</CustomerOrders>
All the orders that correspond to a given customer go under the node of that customer, building up a more reasonable and useful structure.
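A minimal in-memory sketch (made-up data) reproduces the switch: with Nested left at false, the Orders records render as siblings of the Customers records; flipping Nested to true moves each order under its parent customer node.

```csharp
using System;
using System.Data;
using System.IO;

DataSet ds = new DataSet("CustomerOrders");
DataTable customers = ds.Tables.Add("Customers");
customers.Columns.Add("CustID", typeof(int));
DataTable orders = ds.Tables.Add("Orders");
orders.Columns.Add("CustID", typeof(int));
DataRelation rel = ds.Relations.Add("Rel",
    customers.Columns["CustID"], orders.Columns["CustID"]);

customers.Rows.Add(new object[] { 1 });
orders.Rows.Add(new object[] { 1 });

rel.Nested = true;                 // render child rows inside their parent node

StringWriter sw = new StringWriter();
ds.WriteXml(sw);
Console.WriteLine(sw.ToString()); // the <Orders> subtree now sits inside <Customers>
```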
DataRelation is the ADO.NET object that represents a logical link between two tables through a common column. The DataRelation defines the parent/child relationship, but the tables and the columns remain separate entities. Once a relation has been set, you can easily access the child rows of the detail table either by using methods on the DataRow object or switching to the XML hierarchical representation. The DataRelation object looks like an in-memory INNER JOIN, but without the same redundancy of information.
Dialog Box: Modifications Through the View

What's the role of all those AllowXXX properties on the DataView object? Can I modify rows or not through a DataTable's view?

A DataView is an object that provides a particular representation of the content of a given table. The DataView and the DataTable are independent objects, and the DataView simply holds a link to the parent table. The DataView doesn't cache the table, nor does it make an internal copy of the data. The DataView is simply an object that contains some information about the way, and the order, in which the content of the table must be shown. The core function that you execute on a DataView is the enumeration of its items. This happens explicitly when you loop through its content, or implicitly when you assign the DataView to the DataSource property of a data-bound control. When a data-bound control calls its DataBind method, the content of the data source is enumerated and the control's Items collection is properly populated. When a view is involved, the caller enumerates through the view, and the view in turn enumerates through the parent table, applying sorting expressions and filters. The AllowEdit, AllowDelete, and AllowNew Boolean properties indicate whether the DataView, and the user interface associated with it, allow updates, deletions, and insertions. This doesn't affect the way in which the parent table is updated. Those properties apply only to the edit operations carried out through the DataView object or a data-bound control that uses it.
Dino Esposito
Wintellect
November 8, 2001
The interaction between ADO.NET applications and the underlying data sources is based on a dual architecture with two-way channels. You access a data source to read and write rows using either individual and provider-specific commands or batch update procedures. In both cases, the data access results in a complete two-way binding and involves different objects and methods. You use command classes like SqlCommand and OleDbCommand to execute single commands. You would use data adapter objects to download disconnected data and submit sets of updated rows. Individual commands return data through data reader objects, whereas the DataSet is the container object that the data adapter utilizes to return and submit blocks of records.
Updates accomplished through individual commands, stored procedures, and in general any command text the managed provider understands, are normally referred to as updates. An update command always carries the new data embedded in the body of the statement. It always requires an open connection, and may also require an ongoing or a new transaction. The batch update is the offshoot of a slightly different approach. At the highest level of abstraction, you don't issue a command, no matter how complex it could be. Instead, you submit a snapshot of the current rows as modified on the client and wait for the data source's approval. The key concept behind batch update is data disconnection. You download a table of rows, typically in a DataSet, modify it as needed on the client, and then submit the new image of those rows to the database server. You submit changes rather than executing a command that creates changes at the data source. This is the essential difference between update, which I covered in my July column, and batch update.
The figure below illustrates the dual update architecture of ADO.NET.
Figure 1. The dual two-way interaction between ADO.NET applications and the data source
Before going any further with the details of ADO.NET batch update, I'd like to clarify one aspect of the batch update model that often leads to misunderstanding. Although update and batch update are philosophically different, in terms of the actual implementation within ADO.NET they follow the same update model. Both update and batch update are accomplished through direct, provider-specific statements. Of course, since batch update normally involves more rows, the statements are grouped into a batch call. Batch update loops through the rows of the target DataSet and issues the proper command (INSERT, UPDATE, or DELETE) whenever an inserted, updated, or deleted row is found. For each such row, a predefined direct SQL command is run. In essence, this is batch update.
This comes as no surprise. In fact, if batch update were using a completely different model of update, then special support would have been required from the data source. (This is what happens when you submit XML updategrams to SQL Server 2000.) Batch update is just a client-provided software mechanism to simplify the submission of multiple row updates. In any case, each row submission is always made through the normal channels of data source direct commands.
So far I've only hinted at SQL commands, but these hints are a sure sign of an important difference between the ADO and the ADO.NET batch update implementations. In ADO, batch update was only possible for SQL-based data sources. In ADO.NET, instead, batch update is possible for any kind of managed provider, including those that do not expose their data through the SQL query language. That said, it's about time I start reviewing the key aspects of ADO.NET batch update programming.
ADO.NET batch update takes place through the Update method of the data adapter object. Data can be submitted only on a per-table basis. If you call Update without specifying a table name, a default name of Table is assumed. If no table exists with that name, an exception is raised. Update first examines the RowState property of each table row and then prepares a tailor-made INSERT, UPDATE, or DELETE statement for each inserted, updated, or deleted row in the specified table.
The Update method has several overloads. It can take a pair given by the DataSet and the DataTable, a DataTable alone, or even an array of DataRow objects. The method returns an integer value: the number of rows successfully updated.
To minimize the network traffic, you normally invoke Update on a subset of the DataSet you are working on. Needless to say, this subset contains only the rows that have been modified in the meantime. You get such a subset by calling the DataSet's GetChanges method.
if (ds.HasChanges())
{
    DataSet dsChanges = ds.GetChanges();
    adapter.Update(dsChanges, "MyTable");
}
You check the DataSet for changes beforehand using the HasChanges method, which returns a Boolean value.
The DataSet returned by GetChanges contains the rows that have been inserted, deleted, or modified in the meantime. But in the meantime of what? This is a tricky aspect of ADO.NET batch update and has to do with the current state of a table row.
Each row in a DataTable is rendered through a DataRow object. A DataRow object mainly exists to be an element of the Rows collection of a parent DataTable object. Conceptually, a database row is inherently linked to the structure of a given table. For this reason, the DataRow class in ADO.NET does not provide a public constructor. The only way to create a new DataRow object is by means of a method called NewRow on a particular living instance of a DataTable object. Upon creation, a row does not yet belong to the Rows collection of the parent table, but its relationship with this collection determines the state of the row. The following list shows the feasible values for the RowState property. Those values are grouped in the DataRowState enumeration.
Added: The row has been added to the table.
Deleted: The row has been marked for deletion from the parent table.
Detached: Either the row has been created but not added to the table, or the row has been removed from the collection of table rows.
Modified: Some columns within the row have been changed.
Unchanged: No changes have been made to the row since creation or since the last call to the AcceptChanges method.
The RowState property of each row influences the return value of the HasChanges method and the contents of the child DataSet returned by GetChanges.
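A minimal sketch (a hypothetical one-column table) walks a single row through the states listed above:

```csharp
using System;
using System.Data;

DataTable table = new DataTable("Customers");
table.Columns.Add("CustID", typeof(int));

DataRow row = table.NewRow();          // created, but not yet in the Rows collection
Console.WriteLine(row.RowState);       // Detached

table.Rows.Add(row);
Console.WriteLine(row.RowState);       // Added

table.AcceptChanges();                 // commit: current values become original values
Console.WriteLine(row.RowState);       // Unchanged

row["CustID"] = 42;
Console.WriteLine(row.RowState);       // Modified

row.Delete();                          // logical deletion only
Console.WriteLine(row.RowState);       // Deleted
```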
From the range of the feasible values, it turns out that the value of RowState mostly depends on the operation that has been performed on the row. ADO.NET tables implement a transaction-like commit model based on two methods: AcceptChanges and RejectChanges. When the table is downloaded from the data source, or freshly created in memory, all the rows are unchanged. The changes you enter are not immediately persistent and can be rolled back at any time by calling RejectChanges. You can call the RejectChanges method at three levels: on the whole DataSet, on a particular table, or on an individual row.
The AcceptChanges method has the power to commit all the ongoing changes. It makes the DataSet accept the current values as the new original values. As a result, all the pending changes are cleared. Just like RejectChanges, AcceptChanges can be called on the whole DataSet, on a particular table, or on an individual row.
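A minimal in-memory sketch (made-up data) of the rollback scopes: rejecting a single row leaves the other pending changes alone, while rejecting at the table level clears them all.

```csharp
using System;
using System.Data;

DataSet ds = new DataSet();
DataTable table = ds.Tables.Add("Customers");
table.Columns.Add("Name", typeof(string));
table.Rows.Add(new object[] { "Acme Inc" });
table.Rows.Add(new object[] { "Foo Corp" });
ds.AcceptChanges();                        // everything starts as Unchanged

table.Rows[0]["Name"] = "Acme Ltd";        // two pending changes
table.Rows[1]["Name"] = "Foo Inc";

table.Rows[0].RejectChanges();             // row-level rollback
Console.WriteLine(table.Rows[0]["Name"]);  // Acme Inc
Console.WriteLine(table.Rows[1]["Name"]);  // Foo Inc (still pending)

table.RejectChanges();                     // table-level rollback clears the rest
Console.WriteLine(table.Rows[1]["Name"]);  // Foo Corp

// ds.RejectChanges() would do the same across every table in the DataSet.
```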
When you start a batch update operation, only the rows marked as Added, Deleted, and Modified are taken into account for submission. If you happen to call AcceptChanges prior to the batch update, no change will be persisted to the data source.
On the other hand, once the batch update operation has successfully completed, you must call AcceptChanges to clear pending changes and mark the current DataSet values as the original values. Notice that omitting a final call to AcceptChanges would keep pending changes in the DataSet, with the result that they are re-issued the next time you batch update.
// Get changes in the DataSet
dsChanges = ds.GetChanges();

// Perform the batch update for the given table
da.Update(dsChanges, strTable);

// Clear any pending changes in memory
ds.AcceptChanges();
The code above illustrates the three main steps behind ADO.NET batch update.
If you delete a row from a DataSet table, pay attention to the method you use: Delete or Remove. The Delete method performs a logical deletion by marking the row as Deleted. The Remove method, instead, physically removes the row from the Rows collection. As a result, a row deleted through Remove is not marked for deletion and is subsequently not processed during the batch update. If the ultimate goal of your deletion is removing the row from the data source, then use Delete.
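A minimal sketch (made-up data) of the difference: the deleted row stays in the table and shows up in GetChanges, while the removed row simply vanishes.

```csharp
using System;
using System.Data;

DataTable table = new DataTable("Customers");
table.Columns.Add("Name", typeof(string));
table.Rows.Add(new object[] { "Acme Inc" });
table.Rows.Add(new object[] { "Foo Corp" });
table.AcceptChanges();

table.Rows[0].Delete();                  // logical deletion: row is only marked Deleted
Console.WriteLine(table.Rows.Count);     // 2 (the deleted row is still in the collection)

table.Rows.Remove(table.Rows[1]);        // physical removal from the collection
Console.WriteLine(table.Rows.Count);     // 1

// Only the logically deleted row is picked up as a pending change.
DataTable changes = table.GetChanges();
Console.WriteLine(changes.Rows.Count);   // 1
```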
Three operations can modify the state of a table: the insertion of a new row, the deletion of an existing row, and the update of an existing row.
For each of these key operations, the data adapter defines a tailor-made command object that is exposed as a property: InsertCommand, DeleteCommand, and UpdateCommand. The programmer is responsible for assigning these properties meaningful command objects, for example, SqlCommand objects.
Just the availability of the InsertCommand, DeleteCommand, and UpdateCommand properties represents a quantum leap from ADO. Such properties give you unprecedented control over the way in which in-memory updates are submitted to the database server. If you happen to dislike the update code that ADO.NET generates, you can now modify it without renouncing the overall benefits of batch update. With ADO, you had no control over the SQL commands silently generated by the library. In ADO.NET, instead, publicly exposed command objects allow you to apply updates using made-to-measure stored procedures or SQL statements that better match your expectations. In particular, you can have the batch update system work with cross-referenced tables and even target non-SQL data providers like Active Directory™ or Indexing Services.
The update commands are expected to run for each changed row in the table and have to be general enough to accommodate different values. Command parameters are good at this kind of task, as long as you can bind them to the values of a table column. ADO.NET parameter objects expose two properties, SourceColumn and SourceVersion, which provide this type of binding. SourceColumn, in particular, represents an indirect way to indicate the parameter's value. Instead of setting the Value property with a scalar value, you set the SourceColumn property with a column name and let the batch update mechanism extract the effective value from the row being processed.
SourceVersion indicates which value should be read from the column. By default, ADO.NET returns the current value of the row. As an alternative, you can select the original value, or any of the values found in the DataRowVersion enumeration.
If you want to batch update a couple of columns on the Northwind's Employees table, you can use the following, handcrafted commands. The INSERT command is defined as follows:
StringBuilder sb = new StringBuilder("");
sb.Append("INSERT Employees (firstname, lastname) VALUES(");
sb.Append("@sFirstName, @sLastName)");
da.InsertCommand = new SqlCommand();
da.InsertCommand.CommandText = sb.ToString();
da.InsertCommand.Connection = conn;
All the parameters will be added to the InsertCommand's Parameters collection and bound to a DataTable column.
SqlParameter p1 = new SqlParameter("@sFirstName", SqlDbType.NVarChar, 10);
p1.SourceVersion = DataRowVersion.Current;
p1.SourceColumn = "firstname";
da.InsertCommand.Parameters.Add(p1);

SqlParameter p2 = new SqlParameter("@sLastName", SqlDbType.NVarChar, 30);
p2.SourceVersion = DataRowVersion.Current;
p2.SourceColumn = "lastname";
da.InsertCommand.Parameters.Add(p2);
Notice that auto-increment columns should not be listed in the syntax of the INSERT command as their value is being generated by the data source.
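On the client side, you can mirror the data source's identity column through the DataColumn.AutoIncrement property, so that locally inserted rows still get provisional key values while you work disconnected. A minimal sketch, assuming an Employees-like table:

```csharp
using System;
using System.Data;

DataTable employees = new DataTable("Employees");
DataColumn id = employees.Columns.Add("employeeid", typeof(int));
id.AutoIncrement = true;       // let ADO.NET generate the value locally
id.AutoIncrementSeed = 1;
id.AutoIncrementStep = 1;
employees.Columns.Add("lastname", typeof(string));

// Pass null for the auto-increment column; the value is filled in for you.
DataRow r1 = employees.Rows.Add(new object[] { null, "Davolio" });
DataRow r2 = employees.Rows.Add(new object[] { null, "Fuller" });
Console.WriteLine(r1["employeeid"]);   // 1
Console.WriteLine(r2["employeeid"]);   // 2
```

These locally generated values are provisional: the authoritative keys are produced by the data source when the INSERT commands actually run.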
The UPDATE command needs to identify one particular row to apply its changes. You do this using a WHERE clause in which a parameterized value is compared against a key field. In this case, the parameter used in the WHERE clause must be bound to the original value of the row, instead of the current value.
StringBuilder sb = new StringBuilder("");
sb.Append("UPDATE Employees SET ");
sb.Append("lastname=@sLastName, firstname=@sFirstName ");
sb.Append("WHERE employeeid=@nEmpID");
da.UpdateCommand = new SqlCommand();
da.UpdateCommand.CommandText = sb.ToString();
da.UpdateCommand.Connection = conn;

// p1 and p2 set as before
// ...
SqlParameter p3 = new SqlParameter("@nEmpID", SqlDbType.Int);
p3.SourceVersion = DataRowVersion.Original;
p3.SourceColumn = "employeeid";
da.UpdateCommand.Parameters.Add(p3);
Finally, the DELETE command requires a WHERE clause to identify the row to remove. In this case, you need to use the original version of the row to bind the parameter value.
StringBuilder sb = new StringBuilder("");
sb.Append("DELETE FROM Employees ");
sb.Append("WHERE employeeid=@nEmpID");
da.DeleteCommand = new SqlCommand();
da.DeleteCommand.CommandText = sb.ToString();
da.DeleteCommand.Connection = conn;

SqlParameter p1 = new SqlParameter("@nEmpID", SqlDbType.Int);
p1.SourceVersion = DataRowVersion.Original;
p1.SourceColumn = "employeeid";
da.DeleteCommand.Parameters.Add(p1);
The actual structure of the SQL commands is up to you. They don't need to be plain SQL statements and can be more effective stored procedures, if you want to go in that direction. If there's a concrete risk that someone else has updated the row that you read and modified, then you might want to take more effective counter-measures. In that case, you can use a more restrictive WHERE clause on the DELETE and UPDATE commands: a clause that not only unequivocally identifies the row, but also makes sure that all the columns still hold their original values.
UPDATE Employees SET
    field1=@new_field1,
    field2=@new_field2,
    ...
    fieldn=@new_fieldn
WHERE field1=@old_field1 AND
    field2=@old_field2 AND
    ...
    fieldn=@old_fieldn
Notice that you don't need to fill all command properties, but only those that you plan to use. If the code happens to use a command that has not been specified, an exception is thrown. Setting up the commands for a batch update process may require a lot of code, but you don't need to do it each and every time you batch update. In a fair number of cases, in fact, ADO.NET is capable of automatically generating effective update commands for you.
To utilize default commands, you have to fulfill two requirements. First off, you must assign a valid command object to the SelectCommand property. You don't need to populate the other command objects, but SelectCommand must point to a valid query statement. A valid query for batch update is one that returns a primary key column. In addition, the query must not include INNER JOINs or calculated columns, and must not reference multiple tables.
The columns and the table listed in the SelectCommand object will actually be used to prepare the body of the update and insert statements. If you don't set SelectCommand, then ADO.NET command auto-generation cannot work. The following code shows how to code the SelectCommand property.
SqlCommand cmd = new SqlCommand();
cmd.CommandText = "SELECT employeeid, firstname, lastname FROM Employees";
cmd.Connection = conn;
da.SelectCommand = cmd;
Don't worry about the possible impact that SelectCommand may have on performance. The related statement executes only once prior to the batch update process, but it only retrieves column metadata. No matter how you write the SQL statement, no rows will ever be returned to the caller program. This happens because at execution time, the SelectCommand is appended to a SQL batch statement that begins with
SET FMTONLY OFF
SET NO_BROWSETABLE ON
SET FMTONLY ON
As a result, the query does not return rows, but rather column metadata information.
The second requirement your code must fulfill regards command builders. A command builder is a managed provider-specific class that works atop the data adapter object and automatically sets its InsertCommand, DeleteCommand, and UpdateCommand properties. A command builder first runs SelectCommand to collect enough information about the involved tables and columns, and then creates the update commands. The actual creation of the commands takes place in the command builder class constructor.
SqlCommandBuilder cb = new SqlCommandBuilder(da);
The SqlCommandBuilder class ensures that the specified data adapter can be successfully used to batch update the given data source. The SqlCommandBuilder utilizes some of the properties defined in the SelectCommand object. They are Connection, CommandTimeout, and Transaction. Whenever any of these properties is modified, you need to call the command builder's RefreshSchema method so that the structure of the commands generated for further batch updates is refreshed accordingly.
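As a brief sketch (assuming an open SqlConnection named conn and a SqlDataAdapter named da whose SelectCommand is already set; both names are hypothetical), refreshing the builder after one of these properties changes might look like this:

```csharp
// Hook the builder up to the adapter; commands are generated here.
SqlCommandBuilder cb = new SqlCommandBuilder(da);

// Later, the select command gets bound to a transaction...
SqlTransaction tran = conn.BeginTransaction();
da.SelectCommand.Transaction = tran;

// ...so the builder must regenerate its commands accordingly.
cb.RefreshSchema();
```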
You can mix together command builders and handcrafted commands. If the InsertCommand property points to a valid command object prior to calling the command builder, then the builder would generate only the code for DeleteCommand and UpdateCommand. A non-null SelectCommand property, instead, is key for command builders to work.
Typically, you use command builders because you don't want to cope with the intricacies of SQL commands. However, if you want to have a look at the source code generated by the builders, you can call methods like GetInsertCommand, GetUpdateCommand, and GetDeleteCommand.
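For instance, a quick way to peek at the generated SQL (again assuming a hypothetical, already-configured adapter da) is to print the command text:

```csharp
SqlCommandBuilder cb = new SqlCommandBuilder(da);

// Each Get method returns the auto-generated command object,
// whose CommandText property holds the SQL statement.
Console.WriteLine(cb.GetInsertCommand().CommandText);
Console.WriteLine(cb.GetUpdateCommand().CommandText);
Console.WriteLine(cb.GetDeleteCommand().CommandText);
```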
Command builders are a provider-specific feature. So, you should not expect to find them supported by all types of managed providers. They work with SQL Server 7.0 and higher and OLE DB providers.
A nice feature of command builders is that they can detect auto-increment fields and properly tune up the code. In particular, they would take auto-increment fields out of the INSERT statement as long as they have the means to recognize certain fields as auto-increment. This can be done in two ways. For example, you could manually set the AutoIncrement property of the corresponding DataColumn object, or, better yet, have this happen automatically based on the attributes that the column has in a data source like SQL Server. To automatically inherit such an attribute, make sure you change the MissingSchemaAction property of the data adapter from the default value of Add to AddWithKey.
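A minimal sketch of the second, automatic approach (connection object and query are hypothetical) could be:

```csharp
SqlDataAdapter da = new SqlDataAdapter(
    "SELECT employeeid, firstname, lastname FROM Employees", conn);

// AddWithKey imports key and auto-increment attributes from the
// data source, so the command builder can recognize identity
// columns and leave them out of its INSERT statement.
da.MissingSchemaAction = MissingSchemaAction.AddWithKey;

DataSet ds = new DataSet();
da.Fill(ds, "Employees");
```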
The batch update mechanism is based on an optimistic vision of concurrency. Each record is not locked after being read and remains exposed to other users for reading and writing. In this scenario, a number of potentially inconsistent situations can occur. For example, a row could have been modified, or even deleted, after it was handed to your application from a SELECT statement, but before a batch update process actually changes it back to the server.
If you update data on the server that has been modified in the meantime by some other user, you may raise a data conflict. To avoid new data being overwritten, the ADO.NET command builders generate statements with a WHERE clause that works only if the current state of the data source row is consistent with what the application previously read. If such a command fails to update the row, the ADO.NET runtime throws an exception of type DBConcurrencyException.
The following code snippet demonstrates a more accurate way to execute a batch update operation with ADO.NET.
try
{
    da.Update(dsChanges, "Employees");
}
catch (DBConcurrencyException dbdcex)
{
    // resolve the conflict
}
The Update method of the data adapter you are using throws the exception for the first row where the update fails. At this time, the control passes back to the client application and the batch update process is stopped. However, all previously submitted changes are committed. This represents another shift from the ADO batch update model.
The DataRow object involved in the conflicted update is made available through the Row property of the DBConcurrencyException class. This DataRow object contains both the proposed and the original value of the row. It does not contain the value currently stored in the database for a given column. This value (the UnderlyingValue property in ADO) can only be retrieved with another query command.
The way in which the conflict is resolved, and the batch update possibly resumed, is strictly application-specific. If there is a situation in which your application needs to resume the update, then be aware of a subtle, yet tricky problem. Once the conflict on the row has been solved in one way or another, you still must figure out a way to accept the changes on the in-memory rows for which batch update completed successfully. If you neglect this technicality, then a new conflict will be raised for the first row that was previously and successfully updated! This will happen over and over again, heading your application straight into a joyful deadlock.
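A sketch of that resume logic might look like the following (ResolveConflict is a hypothetical, application-specific helper; da and dsChanges are the adapter and change set from the earlier snippet):

```csharp
try
{
    da.Update(dsChanges, "Employees");
}
catch (DBConcurrencyException dbdcex)
{
    // Resolve the conflict on the offending row in some
    // application-specific way (hypothetical helper).
    ResolveConflict(dbdcex.Row);

    // Rows submitted before the conflict are already committed on
    // the server; accept them in memory, or the next Update call
    // will re-submit them and raise a fresh conflict.
    DataTable table = dsChanges.Tables["Employees"];
    foreach (DataRow row in table.Rows)
    {
        if (row == dbdcex.Row)
            break;              // rows from here on were not submitted
        row.AcceptChanges();
    }

    // Now the batch update can be resumed.
    da.Update(dsChanges, "Employees");
}
```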
Compared to ADO, batch update in ADO.NET is more powerful and accessible. In ADO, the batch update mechanism was a sort of black box with rare chances for you to plug into it and change what you needed to do in a slightly different way. Batch update in ADO.NET is more of a low-level solution, and its implementation provides several points where you can get in and take control of the events. The trickiest part of ADO.NET batch update is conflict resolution. I heartily suggest you spend as much time as possible testing and retesting. This investment will pay off with all the time you save with command builders.
Dialog Box: Null Values in Data Tables

I fetch a DataSet out of a database and I am happy. Then I try to save this DataSet into an XML file and I am still happy. But when I read this XML file back into a DataSet, my happiness ends. This is because all the columns with a NULL value are not persisted to XML. Is there a way by which NULL values get added as empty tags to the resultant XML?

The behavior is by design and introduced with the best intentions: to save a few bytes during the XML serialization process. If this happens over the network (say, within an XML Web service) the advantage might be significant. That said, your problem has a very simple workaround. The trick is fetching the column through the ISNULL T-SQL function. Instead of using:

SELECT MyColumn FROM MyTable

You should resort to:

SELECT ISNULL(MyColumn, '') FROM MyTable

In this case, any NULL value for the column is automatically turned into an empty string and not ignored during the DataSet-to-XML serialization process. The neutral value does not necessarily have to be the empty string. Numeric columns can use 0 or any other logically null value you want to use.
Dino Esposito
Wintellect
October 9, 2001
When compared to full-fledged OLE DB providers, Microsoft .NET managed providers have a lot to offer. First off, they deliver a simplified data access architecture that often results in improved performance without the loss of functional capabilities. Furthermore, .NET managed providers directly expose provider-specific behavior to consumers through methods and properties. They also involve a much smaller set of interfaces than OLE DB providers. Last but not least, .NET managed providers work within the boundaries of the Common Language Runtime (CLR) and require no COM interaction. For SQL Server 7.0 and SQL Server 2000, the managed provider hooks up directly to the wire level, gaining a substantial performance advantage.
The functionality that a .NET data provider supplies falls into two categories. The simplest flavor of a data provider interacts with callers only through the DataSet, both in reading and writing. In the other case, you can control connections and transactions, and execute direct commands, regardless of the SQL language. The figure below shows the class hierarchy of the two standard managed providers in .NET: the OLE DB provider and the SQL Server provider.
Figure 1. Managed providers connect, execute commands, and get data in a data source-specific way.
The objects that wrap connections, commands, and readers are provider-specific and may expose a slightly different set of properties and methods. Any internal implementation is rigorously database-aware. The only class outside this schema is the DataSet. The class is common to all providers and works as a generic container for disconnected data. The DataSet class belongs to a kind of super-namespace called System.Data, while classes specific to a data provider belong to a provider-specific namespace, such as System.Data.SqlClient or System.Data.OleDb. The schema in the figure above is simplified, though not simplistic, because it does not include all the classes and interfaces involved. The figure below is a bit more accurate.
Figure 2. Classes involved with a managed provider
The table below shows the list of the interfaces that make a .NET provider.
Interface | Description
---|---
IDbConnection | Represents a unique session with a data source
IDbTransaction | Represents a local, non-distributed transaction
IDbCommand | Represents a command that executes when connected to a data source
IDataParameter | Allows implementation of a parameter to a command
IDataReader | Reads a forward-only, read-only stream of data created after the execution of a command
IDataAdapter | Populates a DataSet and resolves changes in the DataSet back to the data source
IDbDataAdapter | Supplies methods to execute typical operations on relational databases (insert, update, select, delete)
Of all these interfaces, only IDataAdapter is mandatory and must be present in every managed provider. If you don't plan to implement one of the interfaces, or one method of a given interface, expose the interface anyway, but throw a NotSupportedException exception. Wherever possible, avoid providing no-op implementations of methods and interfaces, as this may result in data corruption, particularly with the commit/rollback of transactions. For example, providers are not required to support nested transactions, even though the IDbTransaction interface is designed to allow for this situation as well.
Before going any further with the explanation of the role that each class plays in the overall workings of a .NET provider, let me say a few words about the naming convention that a managed provider is recommended to utilize. This is useful if you happen to write your own providers. The first guideline regards the namespace. Make sure you assign your own managed provider a unique namespace. Next, prefix classes with a nickname that identifies the provider throughout any internal and client code. For example, use class names like OdbcConnection, OdbcCommand, OdbcDataReader, and so on. In this case, the nickname is Odbc. In addition, try to use distinct files to compile distinct functionalities.
The provider connection class inherits from IDbConnection and must expose the ConnectionString, State, Database, and ConnectionTimeout properties. The mandatory methods are Open, Close, BeginTransaction, ChangeDatabase, and CreateCommand. You are not strictly required to implement transactions. The following code snippet gives you an idea of the code used to implement a connection.
namespace DotNetMyDataProvider
{
    public class MyConnection : IDbConnection
    {
        private ConnectionState m_state;
        private String m_sConnString;

        public MyConnection()
        {
            m_state = ConnectionState.Closed;
            m_sConnString = "";
        }

        public MyConnection(String connString)
        {
            m_state = ConnectionState.Closed;
            m_sConnString = connString;
        }

        public IDbTransaction BeginTransaction()
        {
            throw new NotSupportedException();
        }

        public IDbTransaction BeginTransaction(IsolationLevel level)
        {
            throw new NotSupportedException();
        }
    }
}
You should provide at least two constructors: the default constructor that takes no arguments, and one that accepts only the connection string. When returning the connection string through the ConnectionString property, make sure you always return exactly what the user set. The only exception is any security-sensitive information that you might want to remove.
The items you recognize and support in the connection string are up to you, but standard names should be used whenever it makes sense. The Open method is responsible for opening the physical channel of communication with the data source. This should happen not before the Open method is called. Consider using some sort of connection pooling if opening a connection turns out to be an expensive operation. Finally, if the provider is expected to provide automatic enlistment in distributed transactions, the enlistment should occur during Open.
An important point that makes ADO.NET connections different from, say, ADO connections, is that you are requested to guarantee that a connection is created and opened before any command can be executed. Clients have to explicitly open and close connections, and no method will open and close connections implicitly for the client. This approach leads to a sort of centralization of security checks. In this way, checks are performed only when the connection is obtained, but the benefits apply to all other classes in the provider that happen to work with connection objects.
You close the connection with the method Close. In general, Close should simply detach the connection and return the object to the pool, if there is a pool. You could also implement a Dispose method to customize the destruction of the object. The state of a connection is identified through the ConnectionState enum data type. While the client works over the connection, you should ensure that the internal state of the connection matches the contents of the State property. So, for instance, when you are fetching data, set the connection's State property to ConnectionState.Fetching.
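Continuing the MyConnection sketch from above, Open and Close might maintain the State property like this (the actual channel set-up is obviously provider-specific and is only hinted at in comments):

```csharp
public void Open()
{
    // Set up the physical channel to the data source here
    // (or fetch a pooled connection), then flag the new state.
    m_state = ConnectionState.Open;
}

public void Close()
{
    // Detach from the data source, or return the object to the
    // pool if one is in use, then flag the new state.
    m_state = ConnectionState.Closed;
}

public ConnectionState State
{
    get { return m_state; }
}
```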
Let's see how a concrete .NET managed provider turns these principles into practice. For this example, I'll take into account the newest managed provider that appeared, though only in early beta releases. I'm talking about the .NET provider for ODBC data sources. You probably noticed already that the .NET provider for OLE DB does not support the DSN token in the connection string. Such a name is required to automatically select the MSDASQL provider and go through ODBC sources. The following code is how ODBC.NET declares its connection class:
public sealed class OdbcConnection : Component, ICloneable, IDbConnection
The OdbcConnection object utilizes ODBC-typical resources, such as environment and connection handles. These objects are stored internally using class private members. The class provides for both Close and Dispose. In general, you can close a connection with either method, but do it before the connection object goes out of scope. Otherwise, the freeing of internal memory (that is, ODBC handles) is left to the garbage collector, whose timing you cannot control. For connection pooling, the OdbcConnection class relies on the services of the ODBC Driver Manager.
To play with the ODBC.NET provider (currently in beta 1), you should include System.Data.Odbc. At this time, the provider is guaranteed to work with drivers for JET, SQL Server, and Oracle.
The command object formulates a request for some actions and passes it on to the data source. If results are returned, the command object is responsible for packaging and returning results as a tailored DataReader object, a scalar value, and/or through output parameters. According to the special features of your data provider, you can arrange results to appear in other formats. For example, the managed provider for SQL Server lets you obtain results in XML format if the command text includes the FOR XML clause.
The class must support at least the CommandText property and at least the text command type. Parsing and executing the command is up to the provider. This is the key aspect that makes it possible for a provider to accept any text or information as a command. Supporting command behaviors is not mandatory and, if needed, you can support more, completely custom behaviors.
Within a command, the connection can be associated with a transaction. If you reset the connection (and users should be able to change the connection at any time), then first null out the corresponding transaction object. If you support transactions, then when setting the Transaction property of the command object, consider additional steps to ensure that the transaction you're using is already associated with the connection the command is using.
A command object works in conjunction with two classes representing parameters. They are xxxParameterCollection, which is accessed through the Parameters property, and xxxParameter, which represents a single command parameter stored in the collection. Of course, the xxx stands for the provider-specific nickname. For ODBC.NET, they are OdbcParameterCollection and OdbcParameter.
You create provider-specific command parameters using the new operator on the parameter class or through the CreateParameter method of the command object. Newly created parameters are populated and added to the command's collection through the methods of the Parameters collection. The module that provides for command execution is then responsible for collecting data sets through parameters. Using named parameters (as the SQL Server provider does) or the ? placeholder (similar to the OLE DB provider) is up to you.
You must have a valid and open connection to execute commands. Execute the commands using any of the standard types of commands, which are ExecuteNonQuery, ExecuteReader, and ExecuteScalar. Also, consider providing an implementation for the Cancel and Prepare methods.
The OdbcCommand class does not support passing named parameters with SQL commands and stored procedures. You must resort to the ? placeholder instead. At least in this early version, it does not support Cancel and Prepare either. As you can expect, the ODBC .NET provider requires that the number of command parameters in the Parameters collection matches the number of placeholders found within the command text. Otherwise, an exception is thrown. The line below shows how to add a new parameter to an ODBC command and assign it at the same time.
cmd.Parameters.Add("@CustID", OdbcType.Int).Value = 99;
Notice that the provider defines its own set of types. The enumeration OdbcType includes all and only the types that the low-level API of ODBC can safely recognize. There is a close match between the original ODBC types, such as SQL_BINARY, SQL_BIGINT, or SQL_CHAR, and the .NET types. In particular, the ODBC type SQL_CHAR maps to the .NET String type.
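Putting these pieces together, a parameterized ODBC command might look like the following sketch (the connection string, query, and parameter name are hypothetical; the parameter name is only a label in the collection, since binding is positional through the ? placeholder):

```csharp
OdbcConnection conn = new OdbcConnection("DSN=MyDataSource");
OdbcCommand cmd = new OdbcCommand(
    "SELECT firstname, lastname FROM Employees WHERE employeeid = ?",
    conn);

// One parameter per ? placeholder, in order of appearance.
cmd.Parameters.Add("@empID", OdbcType.Int).Value = 99;

conn.Open();
OdbcDataReader reader = cmd.ExecuteReader();
while (reader.Read())
    Console.WriteLine("{0} {1}",
        reader.GetString(0), reader.GetString(1));
reader.Close();
conn.Close();
```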
A data reader is a kind of connected, cache-less buffer that the provider creates to let clients read data in a forward-only manner. The actual implementation of the reader is up to the provider's writer. However, a few guidelines should be taken into careful account.
First off, when returned to the user, the DataReader object should always be open and positioned prior to the first record. In addition, users should not be able to directly create a DataReader object; only the command object must create and return a reader. For this reason, you should mark the constructors as internal. In C#, use the keyword internal:
internal MyDataReader(object resultset) {...}
and in Visual Basic .NET, the keyword Friend:
Friend Sub New(ByRef resultset As Object)
    MyBase.New()
    ...
End Sub
The DataReader must have at least two constructors: one taking the result set of the query, and one taking the connection object used to carry the command out. The connection is necessary only if the command must execute with the CommandBehavior.CloseConnection style. In this case, the connection must be automatically closed when the DataReader object is closed. Internally, the resultset can take any form that serves your needs. For example, you can implement it as an array or a dictionary.
A DataReader should properly manage the property RecordsAffected. It is only applicable to batch statements that include inserts, updates, or deletes. It normally does not apply to query commands. When the reader is closed, you might want to disallow certain operations and change the reader's internal state, cleaning up internal resources like the array used to store data.
The DataReader's Read method always moves forward to a new valid row, if any. More importantly, it should only move the internal data pointer forward, but perform no reading. The actual reading takes place with other reader-specific methods, such as GetString and GetValues. Finally, NextResult moves to the next result set. Basically, it copies a new internal structure into a common repository from which methods like GetValues read.
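From the client's perspective, this contract translates into the familiar consumption pattern sketched below (reader stands for any hypothetical provider-specific data reader returned by a command):

```csharp
do
{
    object[] values = new object[reader.FieldCount];
    while (reader.Read())           // advances the pointer; reads nothing
    {
        int n = reader.GetValues(values);   // the actual read happens here
        for (int i = 0; i < n; i++)
            Console.Write("{0}\t", values[i]);
        Console.WriteLine();
    }
} while (reader.NextResult());      // move to the next result set, if any
reader.Close();
```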
Like all the reader classes, OdbcDataReader is sealed and not inheritable. Methods of the class that have access to column values automatically coerce the type of data they return to the type of data that was initially retrieved from that column. The type used the first time a cell is read from a given column is used for all the other cells of the same column. In other words, you cannot read data from the same column as a string and then as a long on successive reads.
When the CommandType property of a command object is set to StoredProcedure, the CommandText property must be set using the standard ODBC escape sequence for procedures. Unlike other providers, the simple name of the procedure is not enough for the ODBC.NET provider. The following pattern represents the typical way of calling stored procedures through ODBC drivers.
{ call storedproc_name(?, ..., ?) }
The string must be wrapped by {...} and have the keyword call to precede the actual name and the list of parameters.
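For example, calling a hypothetical two-parameter procedure through the ODBC provider could be sketched like this (procedure name and parameters are made up for illustration):

```csharp
OdbcCommand cmd = new OdbcCommand("{ call GetEmployee(?, ?) }", conn);
cmd.CommandType = CommandType.StoredProcedure;

// Parameters bind positionally to the ? placeholders.
cmd.Parameters.Add("@id", OdbcType.Int).Value = 99;
cmd.Parameters.Add("@year", OdbcType.Int).Value = 2001;

OdbcDataReader reader = cmd.ExecuteReader();
```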
A full-fledged .NET data provider supplies a data adapter class that inherits both IDbDataAdapter and DbDataAdapter. The class DbDataAdapter implements a data adapter designed for use with a relational database. In other cases, though, what you need is a class that implements the IDataAdapter interface and copies some disconnected data to an in-memory programmable buffer like the DataSet. Implementing the Fill method of the IDataAdapter interface, in fact, is in most cases sufficient to return disconnected data through a DataSet object.
Typical constructors for the DataAdapter object are:
XxxDataAdapter(SqlCommand selectCommand)
XxxDataAdapter(String selectCommandText, String selectConnectionString)
XxxDataAdapter(String selectCommandText, SqlConnection selectConnection)
Classes that inherit from DbDataAdapter must implement all the members, and define additional members in case of provider-specific functionality. This ends up requiring the implementation of the following methods:
Fill(DataSet ds)
FillSchema(DataSet ds, SchemaType st)
Update(DataSet ds)
GetFillParameters()
The required properties are SelectCommand, InsertCommand, UpdateCommand, and DeleteCommand.
You can provide as many implementations of the Fill method as needed.
Table mappings govern the way in which source tables (that is, database tables) are mapped to DataTable objects in the parent DataSet. Mappings take into account table names as well as column names and properties. Schema mapping, instead, regards the way in which columns and tables are treated when it comes to adding new data to existing DataSets. The default value for the missing mapping property tells the adapter to create in-memory tables that look like the source tables. The default value for the missing schema property handles possible issues that arise when the DataTable objects are actually populated. If any of the mapped elements (tables and columns) are missing in the target DataSet, then the value of MissingSchemaAction suggests what to do. In a certain way, both MissingXXX properties are a kind of exception handler. The value Add forces the adapter to add any table or column that proves to be missing. No key information is added unless the AddWithKey value is assigned to the property instead.
When an application calls the Update method, the class examines the RowState property for each row in the DataSet and executes the required INSERT, UPDATE, or DELETE statement. If the class does not provide UpdateCommand, InsertCommand, or DeleteCommand properties, but implements IDbDataAdapter, then you can try to generate commands on the fly or raise an exception. You could also provide a made-to-measure command builder class to help with the command generation.
The ODBC provider supplies the OdbcCommandBuilder class as a means of automatically generating single-table commands. The OLE DB and SQL Server providers have provided similar classes. If you need to update cross-referenced tables, then you might want to use stored procedures or ad-hoc SQL batches. In this case, just override the InsertCommand, UpdateCommand, and DeleteCommand properties to make them run the command object you indicate.
The functionality that a .NET data provider offers can be divided into two main categories: disconnected data support and connected data access. Data providers in .NET support DataSet objects through an implementation of the IDataAdapter interface. They may also support parameterized queries by implementing the IDataParameter interface. If you can't afford disconnected data, then use .NET data readers through the IDataReader interface.
Dialog Box: Naming Multiple Resultsets

Visual Studio .NET has a very nice feature that lets you assign a consistent name to all the tables a data adapter is going to generate. After you've configured a data adapter object in any .NET application, the dialog shows the standard names of the tables being created: Table, Table1, Table2, and so forth. For each of them, you can then specify in a single shot a more evocative name. Is there a way to get this programmatically?

Visual Studio .NET is an excellent product, but there's only a little bit of magic in what it does. To answer your question, yes, there is a way to obtain that programmatically and, incidentally, it's the same code that Visual Studio utilizes behind the scenes. The DataAdapter object has a collection called TableMappings whose elements are objects of type DataTableMapping. What's a table mapping anyway? It is a dynamic association set between a source table and the corresponding DataTable object that the adapter is going to create. If no mapping has been set, then the adapter creates a DataTable object with the same structure as the source table, except for the name. The name is the string specified through the call to the Fill method, or the word Table. Extra tables that originate from multiple resultsets are named after the first. So, in the default case, they are called Table1, Table2, and the like. Instead, if the data adapter is filled out like the code below, then the extra tables are named Employees1, Employees2, and so forth.

myDataAdapter.Fill(myDataSet, "Employees");

What Visual Studio does when you configure your data adapter is create one DataTableMapping object for each association you visually create. The following lines of code are the programmatic way to assign meaningful names to the first two tables of a DataSet filled as above.
myDataAdapter.TableMappings.Add("Employees", "FirstTable");
myDataAdapter.TableMappings.Add("Employees1", "SecondTable");

A third table, if any, could be accessed through Employees2. While this is the most elegant way to name DataTable objects that originate from multiple resultsets, nothing prevents you from using the following, equally effective, code:

myDataAdapter.Fill(myDataSet, "Employees");
myDataSet.Tables["Employees1"].TableName = "SecondTable";

You could also access the table through the index:

myDataSet.Tables[1].TableName = "SecondTable";
By default .Text uses SQL Server for its database back end. However, .Text was designed with a provider model to support different database back ends. In 2003, John Kaster presented Delphi 8 to our local users group. Afterwards he issued a challenge, and I accepted. The challenge was to support InterBase with .Text. The result is blogs.borland.com and blogs.teamb.com.
.Text provides two interfaces to make replaceable database backends possible: Dottext.Framework.Data.IDbProvider and Dottext.Framework.Data.IDTOProvider.

The IDbProvider interface defines approximately 65 methods needed to select, insert, update, and delete records for the various tables in the database. Data is retrieved from select statements using standard ADO.NET interfaces and classes: the IDataReader interface is returned for many of the select statements, and the rest are returned in a DataSet.
For the most part, .Text does not directly access the data returned from a class that implements Dottext.Framework.Data.IDbProvider. Instead it calls a class that implements Dottext.Framework.Data.IDTOProvider. .Text provides a default implementation of this interface, used for SQL Server, called Dottext.Framework.Data.DataDTOProvider. It is responsible for taking the raw data and mapping it into the objects that .Text has defined for each type of data.
After converting the database schema to InterBase (minus the 60+ stored procedures that .Text has for its MS SQL Server implementation), I started by implementing a custom IDbProvider in Delphi for .NET. This provider implemented all of the database calls with SQL statements instead of stored procedures.
One problem I ran into was the way .Text used several procedures that returned multiple cursors. The only way to duplicate this behavior for InterBase was to fill a DataSet using multiple SQL statements.
var
  Cmd: BdpCommand;
  DA: BdpDataAdapter;
  ResultData: DataSet;
begin
  ResultData := DataSet.Create;
  ...
  Cmd.CommandText := 'SELECT * FROM TABLEA';
  ...
  DA := BdpDataAdapter.Create(Cmd);
  DA.Fill(ResultData);
  DA.Free;
  ...
  Cmd.CommandText := 'SELECT * FROM TABLEB';
  ...
  DA := BdpDataAdapter.Create(Cmd);
  DA.Fill(ResultData.Tables.Add);
  DA.Free;
  ...
end;
I also created a new class that implemented Dottext.Framework.Data.IDTOProvider, using C#, to address data transformation issues. I used C# for this class because it was almost the same as the original one provided by .Text, so I was able to copy, rename, and modify it instead of writing it from scratch.
function BdpDataProvider.GetDbConnection: BdpConnection;
begin
  if not Assigned(FConnection) then
    FConnection := BdpConnection.Create(ConnectionString);
  Result := FConnection;
end;
The number of possible page request threads your ASP.NET application can have is partially controlled by how you have configured your system, and is further controlled by how you have configured your application. The following article explains many of the details of how threading works in ASP.NET.
Specifically, remember to call Close in the correct order:
BdpReader.Close;
BdpCommand.Close;
BdpConnection.Close;
.Text was written with C# in Visual Studio 2003. My IDbProvider was written in Delphi for .NET. Initially, to compile the application, I had to first compile the .Text solution in Visual Studio, and then use Delphi 8 for .NET to compile the provider. .Text does not know about the Delphi assembly at compile time: it dynamically loads the provider using a value stored in the web.config file. Initially, I was unsure of what to expect when debugging my Delphi assembly while a Visual Studio application was the main project. I found out that as long as I include debug information, it is possible to step through the Delphi code in Visual Studio.
Now that Delphi 2005 combines both Delphi and C#, we were able to use the Visual Studio project import wizard to import the existing projects. After setting up all the projects in a single project group, I am able to compile the entire blogging application inside Delphi, with no need to use Visual Studio at all on the project.
In addition to writing a custom provider, I modified the security system in .Text so that Borland employees can use their BDN account to administer their .Text blogs, instead of having another user id/password to maintain. Currently this is done by modifying security.cs from the .Text code base, but I hope to change this to a provider-based model similar to the database provider model in use now.
If you are logged into BDN when you submit feedback, your name will auto-populate with your BDN account information. In the future, the feedback may be changed to require that you log into your BDN account to comment, if comment spam becomes a problem as it has for some other popular blog servers.
During this exercise, I was able to prove to myself that Delphi is a first-class citizen in the world of the .NET Framework. It was able to work in a mixed-language environment without any problems. I also found that BDP is a good database solution that implements the ADO.NET interfaces correctly.
Robert Love - http://peakxml.com (My personal blog)
Side Note: If you are a Delphi user interested in creating a technical blog for yourself, visit blogs.slcdug.org. We still have enough bandwidth for quite a few more active bloggers.
NOTE: The views and information expressed in this document represent those of its author(s) who are solely responsible for its content. Borland does not make or give any representation or warranty with respect to such content.