Cross-Origin Resource Sharing (CORS) Support for the Azure Storage Services

To access an Azure Storage account from JavaScript running in a browser, the first step is to create a container in your storage account. Once the container has been created, you need to create a CORS rule for the service so that the JavaScript running in the browser is allowed to access it.
Beginning with version 2013-08-15, the Azure storage services support Cross-Origin Resource Sharing (CORS) for the Blob, Table, and Queue services. The File service supports CORS beginning with version 2015-02-21.

CORS is an HTTP feature that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as same-origin policy that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain. See the CORS specification for details on CORS.

You can set CORS rules individually for each of the Azure Storage services, by calling Set Blob Service Properties, Set File Service Properties, Set Queue Service Properties, and Set Table Service Properties. Once you set the CORS rules for the service, then a properly authorized request made against the service from a different domain will be evaluated to determine whether it is allowed according to the rules you have specified.

Important

CORS is not an authorization mechanism. Any request made against a storage resource when CORS is enabled must either have a valid authorization header, or must be made against a public resource.

CORS is supported for all storage account types except for general-purpose v1 or v2 storage accounts in the premium performance tier.

Understanding CORS requests

A CORS request from an origin domain may consist of two separate requests:

  • A preflight request, which queries the CORS restrictions imposed by the service. The preflight request is required unless the request method is a simple method, meaning GET, HEAD, or POST.

  • The actual request, made against the desired resource.

Preflight request

The preflight request queries the CORS restrictions that have been established for the storage service by the account owner. The web browser (or other user agent) sends an OPTIONS request that includes the request headers, method and origin domain. The storage service evaluates the intended operation based on a pre-configured set of CORS rules that specify which origin domains, request methods, and request headers may be specified on an actual request against a storage resource.

If CORS is enabled for the service and there is a CORS rule that matches the preflight request, the service responds with status code 200 (OK), and includes the required Access-Control headers in the response.

If CORS is not enabled for the service or no CORS rule matches the preflight request, the service will respond with status code 403 (Forbidden).

If the OPTIONS request doesn’t contain the required CORS headers (the Origin and Access-Control-Request-Method headers), the service will respond with status code 400 (Bad request).

Note that a preflight request is evaluated against the service (Blob, File, Queue, or Table) and not against the requested resource. The account owner must have enabled CORS as part of the account service properties in order for the request to succeed.

Actual request

Once the preflight request is accepted and the response is returned, the browser will dispatch the actual request against the storage resource. The browser will deny the actual request immediately if the preflight request is rejected.

The actual request is treated as a normal request against the storage service. The presence of the Origin header indicates that the request is a CORS request, and the service checks it against the matching CORS rules. If a match is found, the Access-Control headers are added to the response and sent back to the client. If a match is not found, the CORS Access-Control headers are not returned.

Enabling CORS for Azure Storage

CORS rules are set at the service level, so you need to enable or disable CORS for each service (Blob, File, Queue, and Table) separately. By default, CORS is disabled for each service. To enable CORS, you need to set the appropriate service properties using version 2013-08-15 or later for the Blob, Queue, and Table services, or version 2015-02-21 or later for the File service. You enable CORS by adding CORS rules to the service properties. For details about how to enable or disable CORS for a service and how to set CORS rules, refer to Set Blob Service Properties, Set File Service Properties, Set Table Service Properties, and Set Queue Service Properties.

Here is a sample of a single CORS rule, specified via a Set Service Properties operation (the values shown are illustrative and correspond to the element descriptions below):
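
<Cors>
    <CorsRule>
        <AllowedOrigins>http://www.contoso.com, http://www.fabrikam.com</AllowedOrigins>
        <AllowedMethods>PUT,GET</AllowedMethods>
        <AllowedHeaders>x-ms-meta-data*,x-ms-meta-target*,x-ms-meta-abc</AllowedHeaders>
        <ExposedHeaders>x-ms-meta-*</ExposedHeaders>
        <MaxAgeInSeconds>200</MaxAgeInSeconds>
    </CorsRule>
</Cors>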

Each element included in the CORS rule is described below:

  • AllowedOrigins: The origin domains that are permitted to make a request against the storage service via CORS. The origin domain is the domain from which the request originates. Note that the origin must be an exact case-sensitive match with the origin that the user agent sends to the service. You can also use the wildcard character '*' to allow all origin domains to make requests via CORS. In the example above, the domains http://www.contoso.com and http://www.fabrikam.com can make requests against the service using CORS.

  • AllowedMethods: The methods (HTTP request verbs) that the origin domain may use for a CORS request. In the example above, only PUT and GET requests are permitted.

  • AllowedHeaders: The request headers that the origin domain may specify on the CORS request. In the example above, all headers starting with x-ms-meta-data or x-ms-meta-target are permitted, as well as the literal header x-ms-meta-abc. The wildcard character '*' indicates that any header beginning with the specified prefix is allowed.

  • ExposedHeaders: The response headers that may be sent in the response to the CORS request and exposed by the browser to the request issuer. In the example above, the browser is instructed to expose any header beginning with x-ms-meta.

  • MaxAgeInSeconds: The maximum amount of time that a browser should cache the preflight OPTIONS request.

The Azure storage services support specifying prefixed headers for both the AllowedHeaders and ExposedHeaders elements. To allow a category of headers, you can specify a common prefix to that category. For example, specifying x-ms-meta* as a prefixed header establishes a rule that will match all headers that begin with x-ms-meta.

The following limitations apply to CORS rules:

  • You can specify up to five CORS rules per storage service (Blob, File, Table, and Queue).

  • The maximum size of all CORS rules settings on the request, excluding XML tags, should not exceed 2 KiB.

  • The length of an allowed header, exposed header, or allowed origin should not exceed 256 characters.

  • Allowed headers and exposed headers may be either:

    • Literal headers, where the exact header name is provided, such as x-ms-meta-processed. A maximum of 64 literal headers may be specified on the request.
    • Prefixed headers, where a prefix of the header is provided, such as x-ms-meta-data*. Specifying a prefix in this manner allows or exposes any header that begins with the given prefix. A maximum of two prefixed headers may be specified on the request.
  • The methods (or HTTP verbs) specified in the AllowedMethods element must conform to the methods supported by Azure storage service APIs. Supported methods are DELETE, GET, HEAD, MERGE, POST, OPTIONS and PUT.

Understanding CORS rule evaluation logic

When a storage service receives a preflight or actual request, it evaluates that request based on the CORS rules you have established for the service via the appropriate Set Service Properties operation. CORS rules are evaluated in the order in which they were set in the request body of the Set Service Properties operation.

CORS rules are evaluated as follows:

  1. First, the origin domain of the request is checked against the domains listed for the AllowedOrigins element. If the origin domain is included in the list, or all domains are allowed with the wildcard character '*', then rules evaluation proceeds. If the origin domain is not included, then the request fails.

  2. Next, the method (or HTTP verb) of the request is checked against the methods listed in the AllowedMethods element. If the method is included in the list, then rules evaluation proceeds; if not, then the request fails.

  3. If the request matches a rule in its origin domain and its method, that rule is selected to process the request and no further rules are evaluated. Before the request can succeed, however, any headers specified on the request are checked against the headers listed in the AllowedHeaders element. If the headers sent do not match the allowed headers, the request fails.

Since the rules are processed in the order they appear in the request body, the best practice is to specify the most restrictive rules with respect to origins first in the list, so that these are evaluated first. Specify rules that are less restrictive – for example, a rule to allow all origins – at the end of the list.

Example – CORS rules evaluation

The following example shows a partial request body for an operation to set CORS rules for the storage services. See Set Blob Service Properties, Set File Service Properties, Set Queue Service Properties, and Set Table Service Properties for details on constructing the request. A rule set consistent with the evaluation below might look like this (the MaxAgeInSeconds, ExposedHeaders, and AllowedHeaders values are illustrative):
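
<Cors>
    <CorsRule>
        <AllowedOrigins>http://www.contoso.com</AllowedOrigins>
        <AllowedMethods>PUT,HEAD</AllowedMethods>
        <MaxAgeInSeconds>5</MaxAgeInSeconds>
        <ExposedHeaders>x-ms-*</ExposedHeaders>
        <AllowedHeaders>x-ms-blob-content-type</AllowedHeaders>
    </CorsRule>
    <CorsRule>
        <AllowedOrigins>*</AllowedOrigins>
        <AllowedMethods>PUT,GET</AllowedMethods>
        <MaxAgeInSeconds>5</MaxAgeInSeconds>
        <ExposedHeaders>x-ms-*</ExposedHeaders>
        <AllowedHeaders>x-ms-blob-content-type</AllowedHeaders>
    </CorsRule>
    <CorsRule>
        <AllowedOrigins>http://www.contoso.com</AllowedOrigins>
        <AllowedMethods>GET</AllowedMethods>
        <MaxAgeInSeconds>5</MaxAgeInSeconds>
        <ExposedHeaders>x-ms-*</ExposedHeaders>
        <AllowedHeaders>*</AllowedHeaders>
    </CorsRule>
</Cors>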

Next, consider the following CORS requests:

Method | Origin                 | Request headers        | Rule match  | Result
PUT    | http://www.contoso.com | x-ms-blob-content-type | First rule  | Success
GET    | http://www.contoso.com | x-ms-blob-content-type | Second rule | Success
GET    | http://www.contoso.com | x-ms-client-request-id | Second rule | Failure

The first request matches the first rule – the origin domain matches the allowed origins, the method matches the allowed methods, and the header matches the allowed headers – and so succeeds.

The second request does not match the first rule because the method does not match the allowed methods. It does, however, match the second rule, so it succeeds.

The third request matches the second rule in its origin domain and method, so no further rules are evaluated. However, the x-ms-client-request-id header is not allowed by the second rule, so the request fails, despite the fact that the semantics of the third rule would have allowed it to succeed.

Note

Although this example shows a less restrictive rule before a more restrictive one, in general the best practice is to list the most restrictive rules first.

Understanding how the Vary header is set

The Vary header is a standard HTTP/1.1 header consisting of a set of request header fields that advise the browser or user agent about the criteria that were selected by the server to process the request. The Vary header is mainly used for caching by proxies, browsers, and CDNs, which use it to determine how the response should be cached. For details, see the specification for the Vary header.

When the browser or another user agent caches the response from a CORS request, the origin domain is cached as the allowed origin. When a second domain issues the same request for a storage resource while the cache is active, the user agent retrieves the cached origin domain. The second domain does not match the cached domain, so the request fails when it would otherwise succeed. In certain cases, Azure Storage sets the Vary header to Origin to instruct the user agent to send the subsequent CORS request to the service when the requesting domain differs from the cached origin.

Azure Storage sets the Vary header to Origin for actual GET/HEAD requests in the following cases:

  • When the request origin exactly matches the allowed origin defined by a CORS rule. To be an exact match, the CORS rule may not include a wildcard '*' character.

  • When there is no rule matching the request origin, but CORS is enabled for the storage service.

In the case where a GET/HEAD request matches a CORS rule that allows all origins, the response indicates that all origins are allowed, and the user agent cache will allow subsequent requests from any origin domain while the cache is active.

Note that for requests using methods other than GET/HEAD, the storage services will not set the Vary header, since responses to these methods are not cached by user agents.

The following table indicates how Azure storage will respond to GET/HEAD requests based on the previously mentioned cases:

The table's columns are:

  (1) Origin header present on request
  (2) CORS rule(s) specified for this service
  (3) Matching rule exists that allows all origins (*)
  (4) Matching rule exists for exact origin match
  (5) Response includes Vary header set to Origin
  (6) Response includes Access-Control-Allow-Origin set to '*'
  (7) Response includes Access-Control-Expose-Headers

(1) | (2) | (3) | (4) | (5) | (6) | (7)
No  | No  | No  | No  | No  | No  | No
No  | Yes | No  | No  | Yes | No  | No
No  | Yes | Yes | No  | No  | Yes | Yes
Yes | No  | No  | No  | No  | No  | No
Yes | Yes | No  | Yes | Yes | No  | Yes
Yes | Yes | No  | No  | Yes | No  | No
Yes | Yes | Yes | No  | No  | Yes | Yes

Billing for CORS requests

Successful preflight requests are billed if you have enabled CORS for any of the storage services for your account (by calling Set Blob Service Properties, Set Queue Service Properties, Set File Service Properties, or Set Table Service Properties). To minimize charges, consider setting the MaxAgeInSeconds element in your CORS rules to a large value so that the user agent caches the request.

Unsuccessful preflight requests will not be billed.

In this chapter from Exam Ref 70-532 Developing Microsoft Azure Solutions, you will learn how to implement each of the Azure Storage services, how to monitor them, and how to manage access. You’ll also learn how to work with Azure SQL Database.

Azure Storage and Azure SQL Database both play an important role in the Microsoft Azure Platform-as-a-Service (PaaS) strategy for storage. Azure Storage enables storage and retrieval of large amounts of unstructured data. You can store content files such as documents and media in the Blob service, use the Table service for NoSQL data, use the Queue service for reliable messages, and use the File service for Server Message Block (SMB) file share scenarios. Azure SQL Database provides classic relational database features as part of an elastic scale service.


Objectives in this chapter:

  • Objective 4.1: Implement Azure Storage blobs and Azure files
  • Objective 4.2: Implement Azure Storage tables
  • Objective 4.3: Implement Azure Storage queues
  • Objective 4.4: Manage access
  • Objective 4.5: Monitor storage
  • Objective 4.6: Implement SQL databases

Objective 4.1: Implement Azure Storage blobs and Azure files

Azure blob storage is the place to store unstructured data of many varieties. You can store images, video files, Word documents, lab results, and any other binary file you can think of. In addition, Azure uses blob storage extensively. For instance, when you mount extra logical drives in an Azure virtual machine (VM), the drive image is actually stored by the Blob service associated with an Azure storage account. In a blob storage account, you can have many containers. Containers are similar to folders in that you can use them to logically group your files. You can also set security on the entire container. Each blob storage account can store up to 500 terabytes of data.

All blobs can be accessed through a URL format. It looks like this:

http://<storage account name>.blob.core.windows.net/<container name>/<blob name>

The Azure File service provides an alternative to blob storage for shared storage, accessible via SMB 2.1 protocol.

Creating a container

This section explains how to create a container and upload a file to blob storage for later reading.

Creating a container (existing portal)

To create a container in the management portal, complete the following steps:

  1. Navigate to the Containers tab for your storage account in the management portal accessed via https://manage.windowsazure.com.
  2. Click Add on the command bar. If you do not yet have a container, you can click Create A Container, as shown in Figure 4-1.

    FIGURE 4-1 The option to create a container for a storage account that has no containers

  3. Give the container a name, and select Public Blob for the access rule, as shown in Figure 4-2.

  4. The URL for the container can be found in the container list, shown in Figure 4-3. You can add additional containers by clicking Add at the bottom of the page on the Containers tab.

    FIGURE 4-3 Containers tab with a list of containers and their URLs

Creating a container (Preview portal)

To create a container in the Preview portal, complete the following steps:

  1. Navigate to the management portal accessed via https://portal.azure.com.
  2. Click Browse on the command bar.
  3. Select Storage from the Filter By drop-down list.
  4. Select your storage account from the list on the Storage blade.
  5. Click the Containers box.
  6. On the Containers blade, click Add on the command bar.
  7. Enter a name for the container, and select Blob for the access type, as shown in Figure 4-4.

  8. The URL for the container can be found in the container list, as shown in Figure 4-5.

    FIGURE 4-5 Containers blade with a list of containers and URLs

Finding your account access key

To access your storage account, you need the account name that was used to build the URL to the account and the primary access key. This section covers how to find the access keys for storage accounts.

Finding your account access key (existing portal)

To find your account access key using the management portal, complete the following steps:

  1. Click the Dashboard tab for your storage account.
  2. Click Manage Keys to find the primary and secondary keys for managing your account, as shown in Figure 4-6. Always use the primary key for management activities (to be discussed later in this chapter).

    FIGURE 4-6 Manage Access Keys dialog box for a storage account

Finding your account access key (Preview portal)

To find your account access key using the Preview portal, complete the following steps:

  1. Navigate to your storage account blade.
  2. Click the Keys box on the storage account blade (see Figure 4-7).

Uploading a blob

You can upload files to blob storage using many approaches, including the following:

  • Using the AzCopy tool provided by Microsoft (http://aka.ms/downloadazcopy)
  • Directly using the Storage API and writing HTTP requests
  • Using the Storage Client Library, which wraps the Storage API into a language and platform-specific library (http://msdn.microsoft.com/en-us/library/azure/dn806401.aspx)
  • Using Windows PowerShell cmdlets (http://msdn.microsoft.com/en-us/library/azure/dn806401.aspx)

To upload a blob using AzCopy, complete the following steps:

  1. Download AzCopy from http://aka.ms/downloadazcopy. Run the .msi file downloaded from this link.
  2. Open a command prompt and navigate to C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy.
  3. Create a text file in a folder that is easy to get to. Insert some random text in it.
  4. In the command window, type a command that looks like this: AzCopy /Source:c:\test /Dest:https://myaccount.blob.core.windows.net/mycontainer2 /DestKey:key /Pattern:*.txt.
  5. Press Enter to issue the command to transfer the file.

Reading data

You can anonymously read blob storage content directly using a browser if public access to blobs is enabled. The URL to your blob content takes this format:

  • https://<your account name>.blob.core.windows.net/<your container name>/<your path and filename>

Reading blobs via a browser

Many storage browsing tools provide a way to view the contents of your blob containers. You can also navigate to the container using the existing management portal or the Preview portal to view the list of blobs. When you browse to the blob URL, the file is downloaded and displayed in the browser according to its content type.

Reading blobs using Visual Studio

You can also use Server Explorer in Visual Studio 2013 to view the contents of your blob containers and upload or download files.

  1. Navigate to the blob storage account that you want to use.
  2. Double-click the blob storage account to open a window showing a list of blobs and providing functionality to upload or download blobs.

Changing data

You can modify the contents of a blob or delete a blob using the Storage API directly, but it is more common to do this programmatically as part of an application, for example using the Storage Client Library.

The following steps illustrate how to update a blob programmatically. Note that this example uses a block blob. The distinction between block and page blobs is discussed in “Storing data using block and page blobs” later in this chapter.

  1. Create a C# console application.
  2. In your app.config file, create a storage configuration string and entry, replacing AccountName and AccountKey with your storage account values:
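
For example (a minimal app.config sketch; the key name StorageConnectionString and the placeholder values are assumptions):

<configuration>
  <appSettings>
    <add key="StorageConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=<AccountName>;AccountKey=<AccountKey>" />
  </appSettings>
</configuration>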

  3. Use NuGet to obtain the Microsoft.WindowsAzure.Storage.dll. An easy way to do this is by using this command in the NuGet console:
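
The package that supplies Microsoft.WindowsAzure.Storage.dll can be installed from the Package Manager Console like this:

Install-Package WindowsAzure.Storage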

  4. In the console application, add the following using statements to the top of your Program.cs file:
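
A sketch of the namespaces the following snippets assume (System.Collections.Generic and System.Text are used by the block-upload example later in this objective):

using System;
using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;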

  5. Add a reference to System.Configuration. Add the following code in the main entry point:
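
A minimal sketch, assuming the appSettings key shown in step 2:

// Read the connection string from app.config and connect to the storage account.
var connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);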

  6. Use CloudBlobClient to gain access to the containers and blobs in your Azure storage account. After it is created, you can set permissions to make it publicly available:
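
For example (the container name "files" is a placeholder):

CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("files");

// After the container has been created (see the next step), make its blobs publicly readable:
container.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });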

  7. Use a CreateIfNotExists method to ensure a container is there before you interact with it:
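
For example:

// Creates the container only if it does not already exist.
container.CreateIfNotExists();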

  8. To upload a file, use the FileStream object to access the stream, and then use the UploadFromStream method on the CloudBlockBlob class to upload the file to Azure blob storage:
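
A minimal sketch, assuming a local file c:\test\myfile.txt:

CloudBlockBlob blockBlob = container.GetBlockBlobReference("myfile.txt");
using (FileStream fileStream = File.OpenRead(@"c:\test\myfile.txt"))
{
    // Uploads the stream contents as a block blob named myfile.txt.
    blockBlob.UploadFromStream(fileStream);
}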

  9. To list all of the blobs, use the following code:
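
For example:

// Passing true requests a flat listing, so blobs in all "subfolders" are returned.
foreach (IListBlobItem item in container.ListBlobs(null, true))
{
    Console.WriteLine(item.Uri);
}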

  10. To download blobs, use the CloudBlobContainer class:
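
A sketch mirroring the upload example (the destination path is a placeholder):

CloudBlockBlob blockBlob = container.GetBlockBlobReference("myfile.txt");
using (FileStream fileStream = File.OpenWrite(@"c:\test\myfile-copy.txt"))
{
    blockBlob.DownloadToStream(fileStream);
}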

  11. To delete a blob, get a reference to the blob and call Delete():
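
For example:

CloudBlockBlob blockBlob = container.GetBlockBlobReference("myfile.txt");
blockBlob.Delete();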

Setting metadata on a container

Blobs and containers have metadata attached to them. There are two forms of metadata:

  • System properties metadata
  • User-defined metadata

System properties can influence how the blob behaves, while user-defined metadata is your own set of name/value pairs that your applications can use. A container has only read-only system properties, while blobs have both read-only and read-write properties.

Setting user-defined metadata

To set user-defined metadata for a container, get the container reference using GetContainerReference(), and then use the Metadata member to set values. After setting all the desired values, call SetMetadata() to persist the values, as in the following example:
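
A sketch (the metadata names and values are placeholders):

CloudBlobContainer container = blobClient.GetContainerReference("files");
// Fetch existing attributes first so values already on the container are not overwritten.
container.FetchAttributes();
container.Metadata["category"] = "images";
container.Metadata["owner"] = "marketing";
// Persist the values to the service.
container.SetMetadata();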

Reading user-defined metadata

To read user-defined metadata for a container, get the container reference using GetContainerReference(), and then use the Metadata member to retrieve a dictionary of values and access them by key, as in the following example:
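
For example:

CloudBlobContainer container = blobClient.GetContainerReference("files");
// FetchAttributes populates both Metadata and Properties from the service.
container.FetchAttributes();
foreach (KeyValuePair<string, string> item in container.Metadata)
{
    Console.WriteLine("{0}: {1}", item.Key, item.Value);
}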

Reading system properties

To read a container’s system properties, first get a reference to the container using GetContainerReference(), and then use the Properties member to retrieve values. The following code illustrates accessing container system properties:
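
For example:

CloudBlobContainer container = blobClient.GetContainerReference("files");
container.FetchAttributes();
Console.WriteLine(container.Properties.LastModified);
Console.WriteLine(container.Properties.ETag);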

Storing data using block and page blobs

The Azure Blob service has two different ways of storing your data: block blobs and page blobs. Block blobs are great for streaming data sequentially, like video and other files. Page blobs are great for non-sequential reads and writes, like the VHD on a hard disk mentioned in earlier chapters.

Block blobs are blobs that are divided into blocks. Each block can be up to 4 MB. When uploading large files into a block blob, you can upload one block at a time in any order you want. You can set the final order of the block blob at the end of the upload process. For large files, you can also upload blocks in parallel. Each block will have an MD5 hash used to verify transfer. You can retransmit a particular block if there’s an issue. You can also associate blocks with a blob after upload, meaning that you can upload blocks and then assemble the block blob after the fact. Any blocks you upload that aren’t committed to a blob will be deleted after a week. Block blobs can be up to 200 GB.
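
A sketch of uploading blocks individually and then committing them, using the PutBlock and PutBlockList methods of CloudBlockBlob (the blob name and file path are placeholders):

CloudBlockBlob blockBlob = container.GetBlockBlobReference("large-file.bin");
byte[] data = File.ReadAllBytes(@"c:\test\large-file.bin");
int blockSize = 4 * 1024 * 1024; // maximum of 4 MB per block
var blockIds = new List<string>();

for (int offset = 0, n = 0; offset < data.Length; offset += blockSize, n++)
{
    // Block IDs must be Base64-encoded and of equal length within a blob.
    string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(n.ToString("d6")));
    using (var block = new MemoryStream(data, offset, Math.Min(blockSize, data.Length - offset)))
    {
        blockBlob.PutBlock(blockId, block, null);
    }
    blockIds.Add(blockId);
}

// Committing the block list assembles the blocks, in this order, into the final blob.
blockBlob.PutBlockList(blockIds);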

Page blobs are blobs composed of 512-byte pages. Unlike block blobs, page blob writes are done in place and are immediately committed to the file. The maximum size of a page blob is 1 terabyte. Page blobs closely mimic how hard drives behave, and in fact, Azure VMs use them for that purpose. Most of the time, you will use block blobs.

Streaming data using blobs

You can stream blobs by downloading to a stream using the DownloadToStream() API method. The advantage of this is that it avoids loading the entire blob into memory, for example before saving it to a file or returning it to a web request.

Accessing blobs securely

Secure access to blob storage implies a secure connection for data transfer and controlled access through authentication and authorization.

Azure Storage supports both HTTP and secure HTTPS requests. For data transfer security, you should always use HTTPS connections. To authorize access to content, you can authenticate in three different ways to your storage account and content:

  • Shared Key: Constructed from a set of fields related to the request. Computed with the HMAC-SHA256 algorithm and encoded in Base64.
  • Shared Key Lite: Similar to Shared Key, but compatible with previous versions of Azure Storage. This provides backwards compatibility with code that was written against versions prior to 19 September 2009, allowing migration to newer versions with minimal changes.
  • Shared Access Signature: Grants restricted access rights to containers and blobs. You can provide a shared access signature to users you don’t trust with your storage account key, granting them specific permissions to the resource for a specified amount of time. This is discussed in a later section.

To interact with blob storage content authenticated with the account key, you can use the Storage Client Library as illustrated in earlier sections. When you create an instance of the CloudStorageAccount using the account name and key, each call to interact with blob storage will be secured, as shown in the following code:
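
A minimal sketch (the account name and key values are placeholders):

// The credentials are used to sign each request with the account key (Shared Key authentication).
var credentials = new StorageCredentials("<AccountName>", "<AccountKey>");
var account = new CloudStorageAccount(credentials, useHttps: true);
CloudBlobClient blobClient = account.CreateCloudBlobClient();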

Implementing an async blob copy

The Blob service provides a feature for asynchronously copying blobs from a source blob to a destination blob. You can run many of these requests in parallel since the operation is asynchronous. The following scenarios are supported:

  • Copying a source blob to a destination with a different name or URI
  • Overwriting a blob with the same blob, which means copying from the same source URI and writing to the same destination URI (this overwrites the blob, replaces metadata, and removes uncommitted blocks)
  • Copying a snapshot to a base blob, for example to promote the snapshot to restore an earlier version
  • Copying a snapshot to a new location, creating a new, writable blob (not a snapshot)

The copy operation is always the entire length of the blob; you can’t copy a range.

The following code illustrates a simple example for creating a blob and then copying it asynchronously to another destination blob:
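
A sketch using the BeginStartCopyFromBlob method discussed below (blob names are placeholders, and the container reference comes from the earlier examples):

CloudBlockBlob source = container.GetBlockBlobReference("source.txt");
source.UploadText("Sample content");

CloudBlockBlob destination = container.GetBlockBlobReference("source-copy.txt");
destination.BeginStartCopyFromBlob(source, asyncResult =>
{
    // Completing the call returns the copy ID; the copy itself continues on the server.
    string copyId = destination.EndStartCopyFromBlob(asyncResult);
}, null);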

Ideally, you pass state to the BeginStartCopyFromBlob() method so that you can track multiple parallel operations.

Configuring the Content Delivery Network

The Azure Content Delivery Network (CDN) distributes content across geographic regions to edge nodes across the globe. The CDN caches publicly available objects so they are available over high-bandwidth connections, close to the users, thus allowing the users to download them at much lower latency. You may be familiar with using CDNs to download popular JavaScript frameworks like jQuery, Angular, and others.

By default, blobs have a seven-day time-to-live (TTL) at the CDN edge node. After that time elapses, the blob is refreshed from the storage account to the edge node. Blobs that are shared via CDN must support anonymous access.

Configuring the CDN (existing portal)

To enable the CDN for a storage account in the management portal, complete the following steps:

  1. In the management portal, click New on the navigation bar.
  2. Select App Services, CDN, Quick Create.
  3. Select the storage account that you want to add CDN support for, and click Create.
  4. Navigate to the CDN properties by selecting it from your list of CDN endpoints.
  5. To enable HTTPS support, click Enable HTTPS at the bottom of the page.
  6. To enable query string support, click Enable Query String Support at the bottom of the page.
  7. To map a custom domain to the CDN endpoint, click Manage Domains at the bottom of the page, and follow the instructions.

To access blobs via CDN, use the CDN address as follows:

http://<your CDN subdomain>.vo.msecnd.net/<your container name>/<your blob path>

If you are using HTTPS and a custom domain, address your blobs as follows:

https://<your domain>/<your container name>/<your blob path>

Configuring the CDN (Preview portal)

You currently cannot configure the CDN using the Preview portal.

Designing blob hierarchies

Blob storage has a hierarchy that involves the following aspects:

  • The storage account name, which is part of the base URI
  • The container within which you store blobs, which is also used for partitioning
  • The blob name, which can include path elements separated by a forward slash (/) to create a sense of folder structure

Using a blob naming convention that resembles a directory structure provides you with additional ways to filter your blob data directly from the name. For example, to group images by their locale to support a localization effort, complete the following steps:

  1. Create a container called images.
  2. Add English bitmaps using the convention en/bmp/*, where * is the file name.
  3. Add English JPEG files using the convention en/jpg/*, where * is the file name.
  4. Add Spanish bitmaps using the convention sp/bmp/*, where * is the file name.
  5. Add Spanish JPEG files using the convention sp/jpg/*, where * is the file name.

To retrieve all images in the container, use ListBlobs() in this way:

var list = images.ListBlobs(null, true, BlobListingDetails.All);

The output is the entire list of uploaded images in the container:
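
(Illustrative output, assuming a file named logo was uploaded under each of the conventions above:)

https://myaccount.blob.core.windows.net/images/en/bmp/logo.bmp
https://myaccount.blob.core.windows.net/images/en/jpg/logo.jpg
https://myaccount.blob.core.windows.net/images/sp/bmp/logo.bmp
https://myaccount.blob.core.windows.net/images/sp/jpg/logo.jpg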

To filter only those with the prefix en, use this:
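
// The first argument is a blob name prefix; "en" matches everything under the en/ path.
var list = images.ListBlobs("en", true, BlobListingDetails.All);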

The output will be this:
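
(Illustrative output for the same sample files:)

https://myaccount.blob.core.windows.net/images/en/bmp/logo.bmp
https://myaccount.blob.core.windows.net/images/en/jpg/logo.jpg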

Configuring custom domains


By default, the URL for accessing the Blob service in a storage account is https://<your account name>.blob.core.windows.net. You can map your own domain or subdomain to the Blob service for your storage account so that users can reach it using the custom domain or subdomain.

Scaling Blob storage

Blobs are partitioned by container name and blob name, which means each blob can have its own partition. Blobs, therefore, can be distributed across many servers to scale access even though they are logically grouped within a container.

Working with Azure File storage

Azure File storage provides a way for applications to share storage accessible via SMB 2.1 protocol. It is particularly useful for VMs and cloud services as a mounted share, and applications can use the File Storage API to access File storage.

Objective summary

  • A blob container has several options for access permissions. When set to Private, all access requires credentials. When set to Public Container, no credentials are required to access the container and its blobs. When set to Public Blob, only blobs can be accessed without credentials if the full URL is known.
  • To access secure containers and blobs, you can use the storage account key or a shared access signature.
  • AzCopy is a useful utility for activities such as uploading blobs, transferring blobs from one container or storage account to another, and performing these and other activities related to blob management in scripted batch operations.
  • Block blobs allow you to upload, store, and download large blobs in blocks up to 4 MB each. The size of the blob can be up to 200 GB.
  • You can use a blob naming convention akin to folder paths to create a logical hierarchy for blobs, which is useful for query operations.

Objective review

Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of this chapter.

  1. Which of the following is not true about metadata? (Choose all that apply.)

    1. Both containers and blobs have writable system properties.
    2. Blob user-defined metadata is accessed as a key value pair.
    3. System metadata can influence how the blob is stored and accessed in Azure Storage.
    4. Only blobs have metadata; containers do not.
  2. Which of the following are valid differences between page blobs and block blobs? (Choose all that apply.)

    1. Page blobs are much faster for all operations.
    2. Block blobs allow files to be uploaded and assembled later. Blocks can be resubmitted individually.
    3. Page blobs are good for all sorts of files, like video and images.
    4. Block blobs have a max size of 200 GB. Page blobs can be 1 terabyte.
  3. What are good recommendations for securing files in Blob storage? (Choose all that apply.)

    1. Always use SSL.
    2. Keep your primary and secondary keys hidden and don’t give them out.
    3. In your application, store them someplace that isn’t embedded in client-side code that users can see.
    4. Make the container publicly available.
