Friday, December 21, 2012

WebForms coding standards

Recently I have been working in a WebForms environment, and each time I go back to the WebForms stack I am surprised to realize that programmers in this world miss so many things that are obvious in other frameworks. Like naming conventions. Why do I see things like:
txtFirstName
btnSubmit
lblLastName
This is totally wrong, and against the Microsoft coding standards. I also see this in WinForms environments; programmers somehow do not realize that by doing it you:
Code to a type/control, not to the content. You should be able to change a control from any type to any other type without worrying about the variable name; here, when you change the type from Label to Literal, you have to change the name everywhere you use it.
Besides that, there are so many new and custom controls that it is hard to come up with a prefix for each of them, and if you use prefixes only for the built-in/default controls, you are being inconsistent and the code looks like crap.
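For example, naming after the content rather than the control type (a minimal sketch; the FirstName field is invented for illustration):
// Hungarian-style name: a rename is forced when the control type changes.
// protected Label lblFirstName;

// Content-based name: swap Label for Literal and nothing else changes.
protected Literal FirstName;

protected void Page_Load(object sender, EventArgs e)
{
    FirstName.Text = "John"; // call sites stay untouched after a type swap
}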

Thursday, December 20, 2012

Resharper and the end of the world

21 December 2012 is the last day of the world. That's why for the last 3 hours I have been trying to upgrade my Resharper from version 6 to version 7, because the price is 75% off. Unfortunately, for the last 3 hours the only response I got was
But I see progress! Now I am getting:
I don't have much time - there are still 20 more hours to go till the end of the world.

Monday, December 17, 2012

RSA Security commonly uses keys of sizes 1024, 2048 or even 3072 bits, while most symmetric algorithms use keys of only between 112 and 256 bits.

The ultimate question is: should I use a longer key? And ladies and gentlemen, here is an answer from Bruce Schneier's book Applied Cryptography:

Longer key lengths are better, but only up to a point. AES will have 128-bit, 192-bit, and 256-bit key lengths. This is far longer than needed for the foreseeable future. In fact, we cannot even imagine a world where 256-bit brute force searches are possible. It requires some fundamental breakthroughs in physics and our understanding of the universe.

One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzman constant. (Stick with me; the physics lesson is almost over.)

Given that k = 1.38 × 10^-16 erg/K, and that the ambient temperature of the universe is 3.2 Kelvin, an ideal computer running at 3.2 K would consume 4.4 × 10^-16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

Now, the annual energy output of our sun is about 1.21 × 10^41 ergs. This is enough to power about 2.7 × 10^56 single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn't have the energy left over to perform any useful calculations with this counter.

But that's just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.


An excellent explanation
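Out of curiosity, the arithmetic is easy to check (a quick C# sketch of mine, not from the book, using the constants quoted above):
double k = 1.38e-16;            // Boltzmann constant [erg/K]
double T = 3.2;                 // ambient temperature of the universe [K]
double ergsPerBit = k * T;      // ~4.4e-16 erg per bit change

double sunYear = 1.21e41;       // annual energy output of the sun [erg]
Console.WriteLine(Math.Log(sunYear / ergsPerBit, 2));       // ~187 bits
Console.WriteLine(Math.Log(32 * sunYear / ergsPerBit, 2));  // ~192 bits (Dyson sphere, 32 years)
Console.WriteLine(Math.Log(1e51 / ergsPerBit, 2));          // ~220 bits (supernova; the book says 219)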

Thursday, December 13, 2012

IDisposable interface - how do I know

Many developers I have met had a problem figuring out whether a class they were using implemented the IDisposable interface. Consider the following code:
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
public partial class ContentHubDataCacheSoapClient : System.ServiceModel.ClientBase<ContentHubDataCacheSoap>, ContentHubDataCacheSoap {
}
When I create an instance of ContentHubDataCacheSoapClient, should I dispose it?

What developers typically did was use the Object Browser to see if there is a method called Dispose; see the picture below for an example matching the code above :)
It's easy to see that there is no Dispose method listed there, but I circled in red the place where one can see whether a class implements the IDisposable interface.

Other developers suggested that IDisposable can be satisfied by implementing a Close method (this is not true; see the code example below).

But why was the Dispose method not listed among the methods? The answer is simple: one can implement a method explicitly, naming the interface that requires it, and thanks to that it will not be listed in the Object Browser. An example is below.
    public class DisposingClass : IDisposable
    {
        public void Dispose() { }
    }

    public class ClosingClass {
        public void Close() { }
    }

    public class ImplementingDisposableInterfaceClass : IDisposable {
        void IDisposable.Dispose() {
            Close();
        }

        public void Close() { }
    }

    public class ChildClass : ImplementingDisposableInterfaceClass { }

    public class UsingClass {
        public void UsingMethod() {
            // Compile-time error: ClosingClass does not implement IDisposable.
            using (var c = new ClosingClass()) { 
            }

            // The typical way of implementing IDisposable.
            using (var d = new DisposingClass())
            {
            }

            using (var d = new ImplementingDisposableInterfaceClass())
            {
            }

            // The Dispose method will not be shown in the Object Browser.
            using (var d = new ChildClass())
            {
            }
        }
    }
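If the Object Browser feels too cumbersome, one can also ask the type system directly (a sketch of mine; note that ClientBase implements IDisposable explicitly, which is exactly why Dispose does not show up, so both checks succeed for the WCF client above):
var client = new ContentHubDataCacheSoapClient();

// A runtime check - explicit interface implementations still count.
if (client is IDisposable)
{
    ((IDisposable)client).Dispose();
}

// Or inspect the type itself (requires using System.Linq).
bool disposable = typeof(ContentHubDataCacheSoapClient)
    .GetInterfaces()
    .Contains(typeof(IDisposable));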

Tuesday, December 11, 2012

Dealing with a hung SQL Backup

The backup process is triggered by a SQL Server job. One can see which SQL Server jobs are currently running by executing the following query:
exec msdb..sp_help_job @execution_status = 1
In order to see all the queries that are being executed we can use sp_who; inside the cmd column we should see a BACKUP string. The sp_who query also lets us find out what SPID the backup process has:
exec sp_who
And the query below shows the status of a backup process - it displays the estimated completion percentage and time:
SELECT r.session_id, r.command,
       CONVERT(NUMERIC(6,2), r.percent_complete) AS [Percent Complete],
       CONVERT(VARCHAR(20), DATEADD(ms, r.estimated_completion_time, GETDATE()), 20) AS [ETA Completion Time],
       CONVERT(NUMERIC(10,2), r.total_elapsed_time / 1000.0 / 60.0) AS [Elapsed Min],
       CONVERT(NUMERIC(10,2), r.estimated_completion_time / 1000.0 / 60.0) AS [ETA Min],
       CONVERT(NUMERIC(10,2), r.estimated_completion_time / 1000.0 / 60.0 / 60.0) AS [ETA Hours],
       CONVERT(VARCHAR(1000),
           (SELECT SUBSTRING(text, r.statement_start_offset / 2,
                   CASE WHEN r.statement_end_offset = -1 THEN 1000
                        ELSE (r.statement_end_offset - r.statement_start_offset) / 2 END)
            FROM sys.dm_exec_sql_text(sql_handle)))
FROM sys.dm_exec_requests r
WHERE r.command IN ('RESTORE DATABASE', 'BACKUP DATABASE')

Files created by jobs run by Task Scheduler

On Windows Server 2008 R2 Enterprise 64-bit, if Task Scheduler runs a task that creates a folder or a file in the current working directory, it will be created in:
C:\Windows\SysWOW64\
This means that if a task runs the following code:
using (var file = new System.IO.StreamWriter("foo.txt"))
{
    file.WriteLine("bar");
}
The file "foo.txt" will be created in a following path
C:\Windows\SysWOW64\foo.txt
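A simple way to avoid the surprise (a sketch; building the path from the executable's directory is my own workaround, not the only one) is to use absolute paths:
// Resolve the directory the executable lives in, instead of relying on
// the working directory that Task Scheduler sets (C:\Windows\SysWOW64\).
var baseDir = AppDomain.CurrentDomain.BaseDirectory;
var path = System.IO.Path.Combine(baseDir, "foo.txt");

using (var file = new System.IO.StreamWriter(path))
{
    file.WriteLine("bar");
}
Alternatively, the 'Start in (optional)' field of the task's action can be set to the desired working directory.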

Monday, December 10, 2012

Using, Exceptions and Ctrl+C

Many people believe that when they use a using statement the Dispose method is always going to be called, and that they are safe to clean up resources there. For investigation purposes I also included catch and finally statements, because a using statement can be thought of as the following code:
try
{
    // The code inside the using statement
}
finally
{
    // A call to the Dispose method
}
As seen in the example above, using does not rethrow or stop an exception; it just lets it go.

Let us consider the code below:
static void Main(string[] args)
{
    Console.CancelKeyPress += new ConsoleCancelEventHandler(myHandler);

    try
    {
        using (new DisposableClass())
        {
            while (true) { 

            }
        }
    }
    catch
    {
        Console.WriteLine("Catch");
    }
    finally {
        Console.WriteLine("Finally");
    }
}

protected static void myHandler(object sender, ConsoleCancelEventArgs args)
{
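    // Unless args.Cancel is set to true here, the process still terminates as
    // soon as this handler returns, so Dispose/catch/finally never run.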
    Console.WriteLine("myHandler intercepted");
}
Where the DisposableClass is listed below:
public class DisposableClass : IDisposable
{
    public void Dispose() {
        Console.WriteLine("Dispose");
    }
}
Many people forget that there are ways to interrupt an application other than exceptions (e.g. forcefully aborting a thread). An example is the good old SIGINT signal, also known as Ctrl+C. If an application executing the code above is inside the while loop and someone presses Ctrl+C, then the Dispose method and the catch and finally blocks are not going to be called. If there is a need to intercept this signal, one has to sign up for the CancelKeyPress event, just as in the code above. In other words, after pressing Ctrl+C the only line that is going to be displayed is:
myHandler intercepted
The next example of the using/catch/finally blocks not being executed is calling
Thread.Abort()
In the majority of cases the blocks will be executed, but when a thread is nearly finished and has already entered its finally block, and someone then calls Thread.Abort(), the rest of the finally block is not going to be executed.
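The common case is easy to observe (a minimal sketch of mine, for the .NET Framework):
var worker = new Thread(() =>
{
    try
    {
        Thread.Sleep(Timeout.Infinite);
    }
    finally
    {
        // In the common case this still runs after Thread.Abort().
        Console.WriteLine("Worker finally");
    }
});

worker.Start();
Thread.Sleep(100);  // let the worker enter its try block
worker.Abort();     // injects a ThreadAbortException into the worker
worker.Join();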

Autofac beta and dependencies

I created a project that used the new version of Autofac, installed via NuGet:
Install-Package Autofac -Pre
Everything was working fine until I deployed the project to production. And then I saw an error:
Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 'System.Core, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e, Retargetable=Yes' or one of its dependencies. The given assembly name or codebase was invalid. (Exception from HRESULT: 0x80131047)
   at Autofac.Builder.RegistrationData..ctor(Service defaultService)
   at Autofac.Builder.RegistrationBuilder`3..ctor(Service defaultService, TActivatorData activatorData, TRegistrationStyle style)
   at Autofac.Builder.RegistrationBuilder.ForType[TImplementer]()
   at Autofac.RegistrationExtensions.RegisterType[TImplementer](ContainerBuilder builder)
   at TTC.ContentHubDataCache.ContainerSetup.BuildContainer() in C:\Development\BrandWebsites\TrafalgarTools\TTC.ContentHubDataCache\TTC.ContentHubDataCache\ContainerSetup.cs:line 26
   at TTC.ContentHubDataCache.UpdateDataCacheProcess..ctor() in C:\Development\BrandWebsites\TrafalgarTools\TTC.ContentHubDataCache\TTC.ContentHubDataCache\UpdateDataCacheProcess.cs:line 58
   at TTC.ContentHubDataCache.Program.Main(String[] args) in C:\Development\BrandWebsites\TrafalgarTools\TTC.ContentHubDataCache\TTC.ContentHubDataCache\Program.cs:line 9
It looked like Autofac was referencing System.Core in a really old version. A quick look at the Autofac.dll dependencies in ILDASM, under the MANIFEST section, showed that:
.assembly extern retargetable System.Core
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E )                         // |.....y.
  .ver 2:0:5:0
}
The beta version of Autofac (Autofac 3.0.0-beta) is using an old System.Core: it is built against .NET 4.0 and yet it references System.Core in version 2.0.5.0 - how bizarre. I uninstalled this version of Autofac and took an older one:
uninstall-package autofac
Install-Package Autofac -Version 2.6.3.862
A quick check of the dependencies:
.assembly extern System.Core
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 )                         // .z\V.4..
  .ver 4:0:0:0
}
Looks good, and it solved my problem. Why they used System.Core in version 2.0.5.0 I do not know - probably they haven't noticed yet. (A retargetable System.Core 2.0.5.0 reference is characteristic of portable class library builds, so that may be the reason.)

Monday, December 3, 2012

Measuring SQL Query execution time

It is not recommended to measure SQL execution time on the DB alone; wise guys believe that it is much more meaningful to run performance tests from the application, so that the response time also includes the network delay and the SQL provider computation time. In my scenario I do not have access to the application, and there are many entry points for a single SQL query; that is why it is optimized on the DB side. The query below displays performance metrics of a query; in my example I am interested in each query that includes the 'VersionedFields' string.
SELECT TOP 40 *
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.text LIKE '%VersionedFields%'
ORDER BY qs.last_execution_time DESC
'Elapsed time' is the most important metric for me. A much more low-level way to measure a query's response time is to use STATISTICS:
DBCC DROPCLEANBUFFERS
SET STATISTICS IO ON 
GO
SET STATISTICS TIME ON
GO

-- SQL Query goes here like, SELECT * FROM VersionedFields

DBCC DROPCLEANBUFFERS
SET STATISTICS IO OFF
GO
SET STATISTICS TIME OFF
GO
A note regarding DROPCLEANBUFFERS: I only run it on a test bed, never in production.
Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server.
To drop clean buffers from the buffer pool, first use CHECKPOINT to produce a cold buffer cache. This forces all dirty pages for the current database to be written to disk and cleans the buffers. After you do this, you can issue DBCC DROPCLEANBUFFERS command to remove all buffers from the buffer pool.
If one really wants to clear the entire cache, CHECKPOINT should also be used (run CHECKPOINT first, then DBCC DROPCLEANBUFFERS).
[CHECKPOINT] Writes all dirty pages for the current database to disk. Dirty pages are data pages that have been entered into the buffer cache and modified, but not yet written to disk. Checkpoints save time during a later recovery by creating a point at which all dirty pages are guaranteed to have been written to disk.
I do not run checkpoint that often.
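And coming back to the first paragraph: when one does have access to the application, a client-side measurement that includes the network delay is just a Stopwatch around the command (a sketch of mine; the connection string and query are placeholders):
using System;
using System.Data.SqlClient;
using System.Diagnostics;

class QueryTimer
{
    static void Main()
    {
        using (var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var command = new SqlCommand("SELECT * FROM VersionedFields", connection))
        {
            connection.Open();
            var watch = Stopwatch.StartNew();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read()) { } // drain the whole result set
            }
            watch.Stop();
            // Unlike the DB-side stats, this includes network and provider overhead.
            Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);
        }
    }
}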

Shrinking Log file in SQL Server 2008

The procedure for shrinking a DB log file (transaction log file) in SQL Server Management Studio:
Right click on the DB -> Properties -> Options -> Recovery model -> change from 'Full' to 'Simple' -> OK
Right click on the DB -> Tasks -> Shrink -> Files -> File type -> Log -> OK
The shrinking procedure should not take more than 3 seconds. Note that it is not possible to change the DB recovery model this way if mirroring is set up. Now, some rules for shrinking and maintaining the log file.
  • By default the recovery model is set to FULL
  • If you store crucial/important information in a DB, the recovery model should be set to FULL
  • If the recovery model is set to FULL, there should be a backup in place - a backup that also includes the transaction log file
  • When a backup of the transaction log runs, the transaction log file is truncated. The truncation occurs after a checkpoint process
  • So if your transaction log is big, like 7 GB, it means that:
    • You don't have a backup that includes the transaction log file - which means you don't need the FULL recovery model
    • Your backup is not working
    • You have a big and heavily used database