Archive for the ‘Azure’ Category

Exception while uploading a Block Blob: StorageClientException – The specified block list is invalid

September 27, 2012


Windows Azure Blob is the simplest way to store text or binary data with Windows Azure. With the new version of Azure, there are two types of blobs: the Block Blob and the Page Blob. I suggest going through this link before reading further in my blog.
In digest:
Block Blob –
A block blob is made of blocks, each having a unique block ID. The maximum block size is 4 MB.
Page Blob –
A page blob is a collection of 512-byte pages.

A Block Blob provides data as a stream, whereas a Page Blob provides data page-wise. We prefer Page Blobs for frequent inserts, updates and reads, i.e. random read/write, whereas Block Blobs suit large data that is mostly read-only.
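As a quick illustration with the 2012-era StorageClient library (the container and blob names here are only examples, not from any real application), the two blob types are obtained through different references:

```csharp
CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("demo");
container.CreateIfNotExist();

// Block blob: written as a stream, or block by block with PutBlock/PutBlockList
CloudBlockBlob blockBlob = container.GetBlockBlobReference("file.dat");

// Page blob: fixed size, written in 512-byte pages for random read/write
CloudPageBlob pageBlob = container.GetPageBlobReference("disk.vhd");
pageBlob.Create(1024 * 512); // size must be a multiple of 512 bytes
```

Note that a page blob must be created with an explicit size before pages can be written, while a block blob grows as blocks are committed.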

Uploading huge data into a blob takes too much time. With the latest version of the Azure SDK, we can break the file into multiple blocks and upload them in parallel to blob storage. Each and every block uploaded into Azure is associated with a block ID.

int blockSize = 4 * 1024; // block size in KB (4 MB, the maximum block size)
var blockIds = new List<string>();
using (MemoryStream ms = new MemoryStream(data)) // data read from the FileStream
{
    ms.Position = 0;
    // block counter
    var blockId = 0;
    while (ms.Position < ms.Length)
    {
        // buffer size in bytes (blockSize is in KB), capped at the remaining data
        var bufferSize = blockSize * 1024 < ms.Length - ms.Position
            ? blockSize * 1024
            : ms.Length - ms.Position;
        var buffer = new byte[bufferSize];
        // read data into the buffer
        ms.Read(buffer, 0, buffer.Length);
        // wrap the buffer in a stream and put it to storage
        using (var mstream = new MemoryStream(buffer))
        {
            mstream.Position = 0;
            // convert the block id to a Base64-encoded string
            var blockIdBase64 = Convert.ToBase64String(
                Encoding.UTF8.GetBytes(blockId.ToString(CultureInfo.InvariantCulture)));
            blob.PutBlock(blockIdBase64, mstream, null);
            blockIds.Add(blockIdBase64);
        }
        // increase the block id
        blockId++;
    }
}

In the above code, we read chunks of data from the file and upload each chunk to the block blob with an ID. Here is the line that does that:
blob.PutBlock(blockIdBase64, mstream, null);

Once all the blocks are uploaded, don't forget to call PutBlockList to commit them; otherwise your blob remains an uncommitted transaction. When you run the above code, after uploading a certain number of blocks, the system throws the exception "The specified block list is invalid" at

var blockIdBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(blockId.ToString(CultureInfo.InvariantCulture)));
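Committing the blocks is a single call; the order of the IDs in the list defines the final content of the blob. A minimal sketch with the 2012-era StorageClient library, assuming blockIds is the list of Base64 block IDs collected during the upload loop:

```csharp
// commit the uploaded blocks; the order of the IDs defines the blob's content
blob.PutBlockList(blockIds);

// blocks that were uploaded but never committed can be inspected with
// blob.DownloadBlockList(); the service discards them after about a week
```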

After troubleshooting for hours, I got to know that the block ID lengths were not all the same. If the block ID starts from 0, the IDs have variable lengths: one digit, two digits, three digits, and so on. Since all block IDs within a blob must have the same length, the blob service is unable to commit the blocks. To fix the issue, just replace

var blockIdBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(blockId.ToString(CultureInfo.InvariantCulture)));

with

var blockIdBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(blockId.ToString(CultureInfo.InvariantCulture).PadLeft(32, '0')));

Now all the block IDs have a fixed length.
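The fix can be wrapped in a small helper; this is an illustrative sketch (the method name is mine, not from the SDK). Padding the decimal counter to 32 characters means every UTF-8 payload is 32 bytes, so every encoded block ID comes out exactly 44 Base64 characters long:

```csharp
using System;
using System.Globalization;
using System.Text;

static string ToBlockId(int blockId)
{
    // pad to a fixed 32 characters so every ID encodes to the same length
    string padded = blockId.ToString(CultureInfo.InvariantCulture).PadLeft(32, '0');
    return Convert.ToBase64String(Encoding.UTF8.GetBytes(padded));
}

// ToBlockId(5) and ToBlockId(50000) both return 44-character strings
```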

I hope this helps someone facing the same issue; it is pretty tough to identify.


Categories: Azure

The context is already tracking a different entity with the same resource URI in Azure

September 26, 2012

Recently I started working on Azure, and got to use Azure Table Storage (ATS). Generally I hate working on back-end systems where I need to worry about indexes, joins, etc. Anyway, we are leaving relational databases behind and using Windows Azure Storage, which provides some additional benefit from a cost and maintenance view. Coming back to my topic, today I will discuss one type of error you may hit while developing an application with ATS. When you try to move data from SQL Server or other data sources into ATS, you might encounter the error "The context is already tracking a different entity with the same resource URI".

What are the reasons for this problem? After a couple of hours of searching, I came up with the root cause. Whenever you move data into ATS, you should consider the proper design of the RowKey and PartitionKey. To know more about the RowKey and PartitionKey, please follow this link. In a nutshell, a PartitionKey groups a set of entities of the same type; within each PartitionKey you have a set of RowKeys, and we have to ensure that each generated RowKey is unique within its partition.

Apart from the keys, each entity in ATS has an ETag property. Azure Table handles concurrent updates and deletes through optimistic concurrency, using an ETag value that is changed each time an entity is updated. The TableServiceContext stores the ETag of every entity it is tracking and submits this ETag in an If-Match request header when an update is requested. Azure Table rejects the update request if the submitted ETag does not match the current ETag for the entity. This comparison is done automatically, since every operation goes over the REST API.
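The tracked-ETag update flow described above can be sketched with the 2012-era StorageClient library; the table name, keys and the CustomerEntity class (a TableServiceEntity subclass) are my own illustrative assumptions:

```csharp
// hedged sketch of an optimistic-concurrency update with TableServiceContext
TableServiceContext ctx = cloudTableClient.GetDataServiceContext();
CustomerEntity entity = ctx.CreateQuery<CustomerEntity>("Customers")
    .Where(e => e.PartitionKey == "EU" && e.RowKey == "42")
    .FirstOrDefault();

entity.Name = "Updated";
ctx.UpdateObject(entity);     // the context submits the tracked ETag in If-Match
ctx.SaveChangesWithRetries(); // rejected with 412 if the ETag no longer matches
```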

Here are some troubleshooting tips. Based on the RowKey design of my application, I came to the conclusion that there were two rows with the same RowKey under one PartitionKey. Always ensure that you don't have two entities with the same RowKey; if you are in such a situation, try concatenating some unique ID onto the end of the RowKey so that ATS can differentiate the two entities.
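One way to guarantee uniqueness (the variable names here are mine, for illustration) is to append a unique suffix whenever the natural key could collide:

```csharp
// append a unique suffix so two otherwise identical rows
// get distinct RowKeys within the same partition
string rowKey = string.Format("{0}_{1}", naturalKey, Guid.NewGuid().ToString("N"));
```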

Another reason for getting the above error is that your deserialized type does not exactly match the type of the tracked entity. Check whether your entity class inherits from TableServiceEntity; it is a must for your entity's base class to be TableServiceEntity.
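A minimal entity sketch showing the required inheritance (the class and property names are illustrative, not from the original application):

```csharp
// an ATS entity must derive from TableServiceEntity to be tracked correctly
public class CustomerEntity : TableServiceEntity
{
    public CustomerEntity(string region, string customerId)
        : base(region, customerId) // PartitionKey = region, RowKey = customerId
    {
    }

    public CustomerEntity() { } // parameterless ctor required for deserialization

    public string Name { get; set; }
}
```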

Another, cruder method to fix the issue is to disable change tracking for the context. To disable change tracking, use the MergeOption.NoTracking option of the MergeOption enumeration:
TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
tableServiceContext.MergeOption = MergeOption.NoTracking;
and then tableServiceContext.AttachTo(tableName, entity, "*");, where "*" forces the update of the entity in the context via DataServiceContext.AttachTo() with an ETag of *.

One of the above three troubleshooting approaches should fix the error.


Categories: Azure Tags: