AJama

I want a singleton wrapper so that I can avoid the .instance.method call syntax.

I want calls like

SingletonWrapper.fn ();

instead of

Singleton.Instance().fn();

The code below is my first attempt, but I think there must be a better way of doing it.

Thanks

public class SingletonWrapper
{
    public static string fn()
    {
        return Singleton.Instance().Foo();
    }

    private class Singleton
    {
        private static Singleton instance;

        protected Singleton() {}

        public static Singleton Instance()
        {
            // Use lazy initialization
            if (instance == null)
            {
                instance = new Singleton();
            }
            return instance;
        }

        public string Foo()
        {
            return "This is it";
        }
    }
}



Re: Visual C# General Improve this singleton wrapper

Caddre

Try the link below for details.

http://www.yoda.arachsys.com/csharp/singleton.html
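For readers who can't follow the link: the approach recommended there leans on the CLR's type-initialization guarantees instead of explicit locking. A sketch of that pattern (reconstructed from memory of the article, so treat the details as approximate):

```csharp
// Thread-safe without explicit locking: the CLR guarantees a type's static
// field initializers run exactly once, before the first access.
public sealed class Singleton
{
    // The CLR runs this initializer under its own type-initialization lock.
    private static readonly Singleton instance = new Singleton();

    // An explicit static constructor stops the compiler marking the type
    // beforefieldinit, so initialization is deferred until first use.
    static Singleton() { }

    private Singleton() { }

    public static Singleton Instance
    {
        get { return instance; }
    }

    public string Foo()
    {
        return "This is it";
    }
}
```

With this in place the wrapper reduces to simple forwarding, e.g. `public static string fn() { return Singleton.Instance.Foo(); }`.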






Re: Visual C# General Improve this singleton wrapper

frederikm

Hi

in the code above you are kind of duplicating the singleton...

take a look at the following:

Code Snippet

public class Something
{
    private static Something _instance = new Something();

    public static String SomeMethod()
    {
        return _instance.SomeOtherMethod();
    }

    String SomeOtherMethod()
    {
        return "gimmegimme";
    }

    protected Something() {}
}

some remarks:

- this is an implementation of a singleton, even if it doesn't have a public static property called Instance, as there can be but one instance at any time

- the static method SomeMethod has a different name than the instance method, as C# doesn't allow you to access static methods via instances

- this allows you to just call Something.SomeMethod

- you could create an additional method CreateInstance(), which would need to be protected or private and be called before passing a call through the instance field

Code Snippet

protected static void CreateInstance()
{
    if (_instance == null)
    {
        _instance = new Something();
    }
}

However, this method is not really thread safe, as two threads can be executing the CreateInstance method at the same time. What you really need to do is:

Code Snippet

private static readonly Object _lock = new Object();

protected static void CreateInstance()
{
    if (_instance == null)
    {
        lock (_lock)
        {
            if (_instance == null) // check again once the lock is held
            {
                _instance = new Something();
            }
        }
    }
}

The code above comes from Jeffrey Richter's CLR via C# (recommended reading).

Hope this helps you out






Re: Visual C# General Improve this singleton wrapper

Peter Ritchie

frederikm wrote:

[...] However, this method is not really thread safe, as two threads can be accessing the CreateInstance method at the same time [...]

The code above comes from Jeffrey Richter's CLR via C# (recommended reading)

The double-checked lock pattern is not thread safe on all platforms either, unless _instance is declared volatile (see http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/). If you're interested in a low-lock singleton, Jon Skeet's implementation at http://www.yoda.arachsys.com/csharp/singleton.html is the one to look at.
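The volatile variant being described would look roughly like this (a sketch, not code from either linked article):

```csharp
public sealed class Singleton
{
    // volatile prevents the read outside the lock from observing a reference
    // to a not-yet-fully-constructed object on weaker memory models.
    private static volatile Singleton instance;
    private static readonly object syncRoot = new object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)          // first check, no lock taken
            {
                lock (syncRoot)
                {
                    if (instance == null)  // second check, under the lock
                    {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }
}
```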






Re: Visual C# General Improve this singleton wrapper

Thomas Danecker

The double-checked locking pattern is thread safe in .NET 2.0 (it would not be thread safe under the ECMA memory model). Here's the excerpt from http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/default.aspx:

"Like all techniques that remove read locks, the code in Figure 7 relies on strong write ordering. For example, this code would be incorrect in the ECMA memory model unless myValue was made volatile because the writes that initialize the LazyInitClass instance might be delayed until after the write to myValue, allowing the client of GetValue to read the uninitialized state. In the .NET Framework 2.0 model, the code works without volatile declarations."






Re: Visual C# General Improve this singleton wrapper

Peter Ritchie

Thomas Danecker wrote:

The double-checked locking pattern is thread safe in .NET 2.0 (it would not be thread safe under the ECMA memory model). [...]

Joe Duffy's interpretation of that suggests you still need volatile for IA64: "The 2.0 memory model does not use ld.acqs unless you are accessing volatile data (marked w/ the volatile modifier keyword or accessed via the Thread.VolatileRead API)." [1] Although he seems contradictory, or perhaps there are heuristics to detect double-checked lock patterns. None of this is in any specification.

But others have read Joe's blog and Vance's article and have come to the same conclusion:
http://geekswithblogs.net/akraus1/articles/90803.aspx
http://www.yoda.arachsys.com/csharp/singleton.html
http://blogs.msdn.com/cbrumme/archive/2003/05/17/51445.aspx

Honestly, I don't consider Vance's article a description of what a standards-compliant framework must do: partly because it's just a magazine article, and partly because it's contradicted by the only spec we have and by others at Microsoft.

[1] http://www.bluebytesoftware.com/blog/PermaLink,guid,543d89ad-8d57-4a51-b7c9-a821e3992bf6.aspx






Re: Visual C# General Improve this singleton wrapper

Thomas Danecker

I read somewhere (maybe it was the CLR specification; I'll have to look it up) that it also works on an IA64 architecture, because the memory model is very restrictive in this case (which leads to performance penalties, but the CLR favours working code on all platforms over performance). I'll look it up and post details, but it will take some time because I'm currently quite busy.




Re: Visual C# General Improve this singleton wrapper

Thomas Danecker

Here's my answer (sooner than expected):

Here's a summary of the applicable rules (all from http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/default.aspx):

The fundamental rules:

  1. The behavior of a thread when run in isolation is not changed. Typically, this means that a read or a write from a given thread to a given location cannot pass a write from the same thread to the same location.
  2. Reads cannot move before entering a lock. (This implies an invalidation of the cache at the beginning of the lock. Otherwise reads would move back in time to the fetching of the cache line, which may be before entering the lock.)
  3. Writes cannot move after exiting a lock. (This implies a flushing of the cache at exiting the lock. Otherwise writes would move forward in time to the flushing of the cache line, which may be after exiting the lock.)

The ECMA memory model:

  1. Reads and writes cannot move before a volatile read. (Implies invalidating the cache.)
  2. Reads and writes cannot move after a volatile write. (Implies flushing the cache.)

These definitions also limit code reordering done by the compilers (both language-to-managed and managed-to-native).

So I do not agree with Joe Duffy, who assumes that the read at "return instance;" may be reordered prior to the read of "!initialized", which is impossible: reads and writes can't move before entering a lock, so the read of instance (after the lock) can't move before the read of initialized (before the lock).

Assume we have the following code:

Code Snippet

class Singleton
{
    static object syncObject = new object();
    static bool initialized;   // implicitly initialized to false
    static Singleton instance; // implicitly initialized to null

    public static Singleton Instance
    {
        get
        {
            if (!initialized) // reading from the cache (maybe an old value)
            {
                lock (syncObject) // invalidating the cache
                {
                    if (!initialized) // reading the value fetched after the lock
                    {
                        instance = new Singleton(); // writing to the cache
                        initialized = true;         // writing to the cache
                    }
                } // exiting the lock flushes the cache (future reads will see the
                  // new values after invalidating the cache on entering the lock)
            }
            return instance;
        }
    }

    private Singleton()
    {
    }
}

The CLR specifies that the fields (syncObject, initialized and instance) are initialized prior to their first use (the first read). This implies that the cache is flushed and invalidated after the CLR initializes them (ensured by the CLR through the lock(typeof(Singleton))), so we'll have no problems with the initialization.

I also want to point out the differences and effects of the IA64 memory model:

The x86 memory model (and thus also the downward-compatible x64 memory model) specifies that "every write has release semantics", meaning that it will be noticed by every read after invalidating the cache.

This semantic isn't specified by the ECMA memory model, and the IA64 architecture doesn't specify it either, but (as stated by Jeffrey Richter in his book "CLR via C#") Microsoft's implementation of the CLR again specifies that "every write has release semantics" (ensured by the JIT compiler for the IA64 architecture).






Re: Visual C# General Improve this singleton wrapper

Thomas Danecker

I have to retract my statement: Joe Duffy is correct.

Assume the value of instance (= null) was loaded into the cache, but the loaded cache line doesn't include initialized. Now another thread, on another CPU with another cache, initializes the singleton, writing initialized = true and instance = new Singleton() to memory (flushing its cache). Now the first thread enters the getter. The value of initialized doesn't exist in its cache, so it's loaded (initialized = true), but this load doesn't load instance (because instance is on another cache line, which was already loaded). The value of instance (= null) still exists in the cache because of the previous fetch. In this very rare case, null will be returned.

This behaviour may also occur on non-IA64 architectures. The only prerequisite for the bug is a system with more than one cache.

A volatile read of initialized wouldn't help in this case, because it wouldn't load instance into the cache. We have to do a volatile read of instance to ensure that the double-checked locking pattern is thread safe.

I'll post a more comprehensible answer later when I'm not that busy.
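The fix described above, sketched with an explicit volatile read rather than the volatile keyword (Thread.VolatileRead(ref object) requires an object-typed field, hence the cast; the separate initialized flag is dropped so there is only one field whose visibility matters):

```csharp
using System.Threading;

public sealed class Singleton
{
    private static readonly object syncObject = new object();
    private static object instance; // object-typed so Thread.VolatileRead applies

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            // Volatile read with acquire semantics: if we see the reference,
            // we also see the fully constructed object it refers to.
            object local = Thread.VolatileRead(ref instance);
            if (local == null)
            {
                lock (syncObject)
                {
                    if (instance == null)
                    {
                        instance = new Singleton(); // published when the lock exits
                    }
                    local = instance;
                }
            }
            return (Singleton)local;
        }
    }
}
```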






Re: Visual C# General Improve this singleton wrapper

Peter Ritchie

Thomas Danecker wrote:

The fundamental rules: [...]

The ECMA memory model: [...]

These definitions also limit code reordering done by the compilers.

Vance's rules 1 and 2 and the two ECMA points that you quote only always hold with respect to processor write-caching (which is what "acquire semantics" and "release semantics" deal with, in my opinion), not with respect to compiler optimization. For example, given the following two methods, the JIT will generate an identical instruction stream on x86 (I qualify "x86" because I don't have access to an IA64 to verify):

Code Snippet

// Ensure the methods aren't inlined so we can be sure a debugger can show us
// the disassembly of the JIT-generated instructions
internal class SomeClass
{
    volatile int volNum;

    [System.Runtime.CompilerServices.MethodImpl(MethodImplOptions.NoInlining)]
    public int Member2()
    {
        int value = 5;
        value = 10;
        volNum = 6;
        return value;
    }

    [System.Runtime.CompilerServices.MethodImpl(MethodImplOptions.NoInlining)]
    public int Member2a()
    {
        volNum = 6;
        return 10;
    }
}

...clearly the write of 10 to value has moved after a volatile write. That doesn't violate those rules if they apply only to flushing the processor's write-cache, because there was never a processor instruction to write the value 10 to value before the volatile write, and VolatileWrite likely calls MemoryBarrier or uses appropriate processor-specific instructions to flush the write-cache.






Re: Visual C# General Improve this singleton wrapper

Peter Ritchie

Thomas Danecker wrote:

I have to retract my statement: Joe Duffy is correct. [...] We have to do a volatile read of instance to ensure that the double-checked locking pattern is thread safe.

Actually, even Vance shows (in http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/default.aspx) that your example (using a bool to test whether an instance is initialized) is not thread safe; see the last three paragraphs of Technique 4: Lazy Initialization.




Re: Visual C# General Improve this singleton wrapper

Thomas Danecker

Peter Ritchie wrote:

Vance's rules 1 and 2 and the two ECMA points that you quote are only always true with respect to processor write-caching [...] and not with compiler optimization. [...] clearly the write of 10 to value has moved after a volatile write.

It does not apply only to processor caches. Try making value a static field (non-volatile) and you'll get different code. Local variables and arguments are not subject to the multi-threaded memory model, so the rules don't apply to them. Only globally visible memory (static variables, fields of classes, etc.) is subject to these rules.

Also, in your example the volatile declaration of volNum has no effect, because in the .NET 2.0 memory model every write is a volatile write; and if you make value a static field, your code will be compiled to native code as-is, meaning there will be no optimizations at all (including of the directly following writes to value: int value = 5; value = 10;).

Peter Ritchie wrote:

Actually, even Vance shows (in http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/default.aspx) that your example (using a bool to test whether an instance is initialized) is not thread safe; see the last three paragraphs of Technique 4: Lazy Initialization.

Yes, you're right. I didn't correctly understand what Vance meant.

To make it thread safe, there must be a volatile read of instance (as described by Joe Duffy).






Re: Visual C# General Improve this singleton wrapper

Peter Ritchie

Thomas Danecker wrote:

It does not only apply on processor-caches. Try to make value a static field (nonvolatile) and you'll have different code. Local variables and arguments are not subject to the multi-threaded memory model and so the rules have no value to them. Only globally visible memory (static variables, fields of classes, etc.) are subject of this rules.

That's not what the rules in either reference say; they specifically mention writes to memory (neither heap nor stack: all memory). Besides, why should stack variables be excluded from JIT optimization restrictions? They can be used by multiple threads at the same time. Take this example:

Code Snippet
public int Method( )
{
double value1 = 3.1415;
int value2 = 42;
IAsyncResult result = BeginBackgroundOperation(ref value1, ref value2);

// Sit in a loop waiting for up to 250ms at a time
// doing something with the double value...
do
{
value2 = 5;
// doubles aren't atomic, we need to use
// VolatileRead to read the "latest written" value and
// because BeginBackgroundOperation uses
// Thread.VolatileWrite(ref double).
double temp = Thread.VolatileRead(ref value1);
Thread.Sleep(value2);
// ...
} while (!result.AsyncWaitHandle.WaitOne(250, false));
return 1;
}

If stack variables (locals) were exempt from such rules, the JIT could optimize that as follows:

Code Snippet
public int Method( )
{
double value1 = 3.1415;
int value2 = 42;
IAsyncResult result = BeginBackgroundOperation(ref value1, ref value2);

do
{
double temp = Thread.VolatileRead(ref value1);
Thread.Sleep(5);
} while (!result.AsyncWaitHandle.WaitOne(250, false));
return 1;
}

...not good.

Thomas Danecker wrote:

Also in your example the volatile declaration of volNum has no affect because in the .net 2.0 memory model every write is a volatile write and if you make value a static field, your code would be compiled to native code as is, meaning that there will be no optimizations at all (including the directly following writes to value (int value = 5; value = 10; ).

I'm not following you: if every write were volatile, no optimizations could occur, based on Vance's rules and ECMA's rules, if those rules affected compiler optimizations in addition to flushing processor write-caches where they exist. The volatile declaration may have no effect on x86; but Joe Duffy has said "volatile" does have an effect on IA64.

And why should I have to leak implementation details of a method into the interface of my class? Yes, a static field would change the generated code; but I could change the code in any number of ways to change what the JIT generates.

The point is, it shows the "fundamental rules" can only be observed as being followed *if* they apply only to writes that are cached by the processor.






Re: Visual C# General Improve this singleton wrapper

Thomas Danecker

There is no way to access local variables or arguments from another thread.

I've provided my own sample code, because your sample is not complete and it seems you haven't tried it out.

Code Snippet

delegate void Method(ref int i);

static void Main()
{
int local = 3;
Method m = Test;

IAsyncResult result = m.BeginInvoke(ref local, null, null);
Thread.Sleep(1000);
Console.WriteLine(local);
local = 5;
m.EndInvoke(ref local, result);
Console.WriteLine(local);
Console.ReadKey();
}
static void Test(ref int argument)
{
argument = 4;
Thread.Sleep(2000);
Console.WriteLine(argument);
}

With your assumptions the output should be the following:

4

5

5

But the output is actually this:

3

4

4

You should know that with asynchronous operations, out and ref params are only passed back at EndXxx.






Re: Visual C# General Improve this singleton wrapper

Peter Ritchie

Thomas Danecker wrote:

There is no way to access local variables or arguments from another thread.

I was thinking of the P/Invoke case; but yes, that would be similar: the address would be marshaled, and the other thread wouldn't be directly accessing that thread's stack. I think you might be able to do it with an unsafe method in C#; but even if you could, it's not a good idea (neither was my original example, if it *could* access another thread's stack). It's moot anyway; there's still code that shows the applicable rules in both the CIL spec and Vance's article being violated if they're viewed in the context of both processor write-cache flushing AND the scope of JIT optimizations. If you remove JIT optimizations from their scope, you no longer have a violation. I'd rather view the framework as not violating either memory model and put "volatile" in there.

Declaring the instance variable volatile in Vance's lazy init (double-checked lock pattern) doesn't make it any less thread-safe...
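[Editor's note: for completeness, a known low-lock alternative sidesteps the volatile debate by publishing the instance with Interlocked.CompareExchange, whose full-fence semantics are guaranteed on all CLR platforms. A sketch, not from any of the articles above:]

```csharp
using System.Threading;

public sealed class Singleton
{
    private static Singleton instance;

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)
            {
                Singleton candidate = new Singleton();
                // CompareExchange is a full memory barrier; exactly one thread's
                // candidate is stored, any other candidate is simply discarded.
                Interlocked.CompareExchange(ref instance, candidate, null);
            }
            return instance;
        }
    }
}
```

The trade-off: under a race, a losing thread may briefly construct a throwaway instance, so this only suits types whose constructor is cheap and free of side effects.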