Interview Questions - Part 1
Conditions for a garbage collection
What triggers the garbage collector
Garbage collection occurs when one of the following conditions is true:
The system has low physical memory. The memory size is detected by either the low memory notification from the operating system or low memory as indicated by the host.
The memory that's used by allocated objects on the managed heap surpasses an acceptable threshold. This threshold is continuously adjusted as the process runs.
The GC.Collect method is called. In almost all cases, you don't have to call this method because the garbage collector runs continuously. This method is primarily used for unique situations and testing.
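The "testing" use of GC.Collect mentioned above can be illustrated with a weak reference; this is a minimal sketch (the WeakRefDemo name is invented for the example), typically used to verify that an object really becomes unreachable:

```csharp
using System;

class WeakRefDemo
{
    static void Main()
    {
        // Allocate an object reachable only through a weak reference.
        var weak = new WeakReference(new object());

        // Force a full, blocking collection - the "unique situations
        // and testing" scenario described above.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // The unreferenced object has typically been reclaimed by now.
        Console.WriteLine(weak.IsAlive);
    }
}
```

Note that in normal code this pattern is unnecessary: the runtime triggers collections itself based on the conditions listed above.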
Is IDisposable.Dispose() called automatically?
Dispose() will not be called automatically. If there is a finalizer, it will be called automatically. Implementing IDisposable provides a way for users of your class to release resources early, instead of waiting for the garbage collector. The preferred way for a client is the using statement, which calls Dispose() automatically even if an exception is thrown.
A proper implementation of IDisposable is:
class MyClass : IDisposable
{
    private bool disposed = false;

    // Must be public to implement IDisposable implicitly.
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                // Release managed resources.
            }
            // Release unmanaged resources.
            disposed = true;
        }
    }

    ~MyClass() { Dispose(false); }
}
If the user of the class calls Dispose(), the cleanup takes place immediately. If the object is instead collected by the garbage collector, the finalizer calls Dispose(false) to do the cleanup. Please note that when called from the finalizer (the ~MyClass method) managed references may be invalid, so only unmanaged resources can be released.
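For completeness, this is how a client would typically consume such a class via using; a simplified sketch (the class body is trimmed to just what the demo needs):

```csharp
using System;

class MyClass : IDisposable
{
    public void Dispose()
    {
        Console.WriteLine("Dispose() called.");
        GC.SuppressFinalize(this);
    }
}

class Program
{
    static void Main()
    {
        // using guarantees Dispose() runs even if the body throws.
        using (var obj = new MyClass())
        {
            Console.WriteLine("Working with obj...");
        } // Dispose() is called here.
    }
}
```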
Async/await vs Task Parallel Library (TPL)
I believe the TPL (Task.Factory.StartNew) works similarly to ThreadPool.QueueUserWorkItem in that it queues up work on a thread pool thread.
From what I've been reading, it seems like async/await only "sometimes" creates a new thread.
Actually, it never does. If you want multithreading, you have to implement it yourself. There's a new Task.Run method that is just shorthand for Task.Factory.StartNew, and it's probably the most common way of starting a task on the thread pool.
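The shorthand above can be spelled out explicitly; a sketch using the documented defaults that Task.Run applies:

```csharp
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // These two calls are roughly equivalent:
        Task a = Task.Run(() => DoWork());

        Task b = Task.Factory.StartNew(() => DoWork(),
            CancellationToken.None,
            TaskCreationOptions.DenyChildAttach,
            TaskScheduler.Default);

        Task.WaitAll(a, b);
    }

    // Stand-in for some CPU-bound work on a thread pool thread.
    static void DoWork() { }
}
```

In practice, prefer Task.Run unless you need the extra knobs that Task.Factory.StartNew exposes.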
If you were dealing with I/O completion ports I can see it not having to create a new thread, but otherwise I would think it would have to.
Bingo. So methods like Stream.ReadAsync will actually create a Task wrapper around an IOCP (if the Stream has an IOCP).
You can also create some non-I/O, non-CPU "tasks". A simple example is Task.Delay, which returns a task that completes after some time period.
The cool thing about async/await is that you can queue some work to the thread pool (e.g., Task.Run), do some I/O-bound operation (e.g., Stream.ReadAsync), and do some other operation (e.g., Task.Delay)... and they're all tasks! They can be awaited or used in combinations like Task.WhenAll.
Any method that returns Task can be awaited - it doesn't have to be an async method. So Task.Delay and I/O-bound operations just use TaskCompletionSource to create and complete a task - the only thing done on the thread pool is the actual task completion when the event occurs (timeout, I/O completion, etc.).
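To make this concrete, here is an illustrative sketch of how a Task.Delay-style method could be built on TaskCompletionSource (MyDelay is an invented name; the real Task.Delay is more elaborate). No thread blocks while waiting - a timer callback completes the task:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class MyDelay
{
    // A simplified Delay built on TaskCompletionSource (illustrative only).
    public static Task Delay(int milliseconds)
    {
        var tcs = new TaskCompletionSource<object>();
        Timer timer = null;
        timer = new Timer(_ =>
        {
            timer.Dispose();
            // Completes the task; only this brief callback
            // runs on a thread pool thread.
            tcs.TrySetResult(null);
        }, null, milliseconds, Timeout.Infinite);
        return tcs.Task;
    }
}

class Program
{
    static void Main()
    {
        Task t = MyDelay.Delay(100);
        t.Wait(); // in real async code you would await it instead
        Console.WriteLine("Completed after ~100 ms.");
    }
}
```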
I/O completion ports provide an efficient threading model for processing multiple asynchronous I/O requests on a multiprocessor system. When a process creates an I/O completion port, the system creates an associated queue object for threads whose sole purpose is to service these requests. Processes that handle many concurrent asynchronous I/O requests can do so more quickly and efficiently by using I/O completion ports in conjunction with a pre-allocated thread pool than by creating threads at the time they receive an I/O request.
How I/O Completion Ports Work
The CreateIoCompletionPort function creates an I/O completion port and associates one or more file handles with that port. When an asynchronous I/O operation on one of these file handles completes, an I/O completion packet is queued in first-in-first-out (FIFO) order to the associated I/O completion port. One powerful use for this mechanism is to combine the synchronization point for multiple file handles into a single object, although there are also other useful applications. Please note that while the packets are queued in FIFO order they may be dequeued in a different order.
I guess my understanding of FromCurrentSynchronizationContext was always a bit fuzzy too. I always thought it was, in essence, the UI thread.
I wrote an article on SynchronizationContext. Most of the time, SynchronizationContext.Current:
is a UI context if the current thread is a UI thread.
is an ASP.NET request context if the current thread is servicing an ASP.NET request.
is a thread pool context otherwise.
Any thread can set its own SynchronizationContext, so there are exceptions to the rules above.
Note that the default Task awaiter will schedule the remainder of the async method on the current SynchronizationContext if it is not null; otherwise it goes on the current TaskScheduler. This isn't so important today, but in the near future it will be an important distinction.
I wrote my own async/await intro on my blog, and Stephen Toub recently posted an excellent async/await FAQ.
Regarding "concurrency" vs "multithreading", see this related SO question. I would say async enables concurrency, which may or may not be multithreaded. It's easy to use await Task.WhenAll or await Task.WhenAny to do concurrent processing, and unless you explicitly use the thread pool (e.g., Task.Run or ConfigureAwait(false)), you can have multiple concurrent operations in progress at the same time (e.g., multiple I/O operations, or other types like Delay) - and no thread is needed for them. I use the term "single-threaded concurrency" for this kind of scenario, though in an ASP.NET host you can actually end up with "zero-threaded concurrency". Which is pretty sweet.
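The single-threaded-concurrency point can be demonstrated with two delays awaited together: they overlap, so the total time is close to the longer delay, not the sum. A minimal sketch:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var sw = Stopwatch.StartNew();

        // Two "operations" started concurrently - no extra threads needed.
        Task first = Task.Delay(200);
        Task second = Task.Delay(300);

        await Task.WhenAll(first, second);

        // Elapsed is roughly the longer delay (~300 ms), not the sum (~500 ms).
        Console.WriteLine($"Elapsed: {sw.ElapsedMilliseconds} ms");
    }
}
```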
Why can't I use the 'await' operator within the body of a lock statement?
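In short: the compiler forbids it (error CS1996) because the code after an await may resume on a different thread, and a Monitor-based lock must be released by the thread that acquired it; holding a lock across an await is also an easy way to deadlock. The usual workaround is SemaphoreSlim.WaitAsync. A sketch (the Cache class and LoadAsync stand-in are invented for illustration):

```csharp
using System.Threading;
using System.Threading.Tasks;

class Cache
{
    private readonly SemaphoreSlim _mutex = new SemaphoreSlim(1, 1);
    private string _value;

    // lock (obj) { await ... } would not compile; this is the async equivalent.
    public async Task<string> GetValueAsync()
    {
        await _mutex.WaitAsync();
        try
        {
            if (_value == null)
                _value = await LoadAsync(); // safe: the semaphore is not thread-affine
            return _value;
        }
        finally
        {
            _mutex.Release();
        }
    }

    // Stand-in for a real asynchronous load (e.g., I/O).
    private Task<string> LoadAsync() => Task.FromResult("loaded");
}
```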
Diamond Problem
C# side-steps the diamond problem: a class can implement any number of interfaces but can inherit from only one base class, as the two examples below show.
using System;

namespace CSharpConsoleApp.DiamondProblemExample
{
    interface IA
    {
        void PrintIA();
    }

    interface IB
    {
        void PrintIB();
    }

    interface IC
    {
        void PrintIC();
    }

    public class A : IA
    {
        public void PrintIA()
        {
            Console.WriteLine("PrintIA method from class A.");
        }
    }

    public class B : IB
    {
        public void PrintIB()
        {
            Console.WriteLine("PrintIB method from class B.");
        }
    }

    public class C : IC
    {
        public void PrintIC()
        {
            Console.WriteLine("PrintIC method from class C.");
        }
    }

    // D implements all three interfaces - no ambiguity, no diamond.
    public class D : IA, IB, IC
    {
        public void PrintIA()
        {
            Console.WriteLine("PrintIA method from class D.");
        }

        public void PrintIB()
        {
            Console.WriteLine("PrintIB method from class D.");
        }

        public void PrintIC()
        {
            Console.WriteLine("PrintIC method from class D.");
        }
    }

    class DiamondProblemExample
    {
        static void Main(string[] args)
        {
            D obj = new D();
            obj.PrintIA();
            obj.PrintIB();
            obj.PrintIC();
            Console.ReadLine();
        }
    }
}
using System;

namespace CSharpConsoleApp.DiamondProblemExample
{
    public class A
    {
        public virtual void Print()
        {
            Console.WriteLine("Print method of class A.");
        }
    }

    public class B : A
    {
        public override void Print()
        {
            Console.WriteLine("Print method of class B");
        }
    }

    public class C : A
    {
        public override void Print()
        {
            Console.WriteLine("Print method of class C");
        }
    }

    // Compile error: a class cannot have multiple base classes,
    // so the diamond (A -> B, A -> C, B + C -> D) cannot arise.
    public class D : C, B
    {
    }

    class DiamondProblemExample
    {
        static void Main(string[] args)
        {
            D obj = new D();
            obj.Print();
        }
    }
}