Channel: Bas Lijten – Bas Lijten

Another look at URL redirects in Sitecore


Redirection of URLs is a very common action: it's important to maintain your SEO value when URLs move around, and to provide friendly, short URLs. The only thing you have to do is create a permanent or temporary redirect, right? There are some solutions which add redirect functionality to Sitecore, for example the great Url Rewrite module by Andy Cohen, which is based on the IIS URL Rewrite 2.0 module by Microsoft. But there are several scenarios in which redirects can be solved in other parts of the infrastructure, or with other products. This may, for example, be the case in larger companies hosting multiple Sitecore instances with multiple sites, where configuring certain types of redirects in different parts of the infrastructure can prevent a lot of other configuration in those same layers, reduce complexity or prevent permission issues around configuring redirects.

This blogpost explains why we chose to handle redirects in different parts of our infrastructure, from a technical and a functional perspective.

A small look at the infrastructure and the way it gets managed

If we look at a typical infrastructure to serve websites, it could be visualized as follows. For every website that needs to be accessible via the internet, configuration is needed (not on every layer of the infrastructure, but on several of them). In small organizations some layers are not present and the work may be done by a single team, or even a single person, but in larger organizations those tasks are usually split amongst several teams, with different workloads, procedures, schedules and priorities. The tasks vary from configuring DNS and virtual IPs, to creating/ordering new certificates for connections over SSL, to blocking incoming requests to specific paths/locations. All these layers are in place to keep the entire infrastructure as safe as possible. Imagine what happens when you are hosting not a single site, but tens, or even hundreds of sites. When those sites should all be accessible over http and https, and via a www- and non-www entry, the workload of all the teams grows, and with a lot of sites the traffic grows as well. Think about what happens to the infrastructure when every request would reach the webservers: every layer would get a lot of traffic. These are just a few reasons to abort or redirect requests as early as possible in the chain.

Note: examples of paths that should be blocked from the internet (thus, on the Content Delivery servers) are the /sitecore/ and /App_Config paths. They should be blocked for anonymous users on IIS as well, but defense in depth (multiple layers of security) is always a good practice.
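As a hedged illustration of the IIS part of that defense (the path values are examples; verify them against your own installation), anonymous access to the Sitecore client can be denied with a standard ASP.NET authorization block in the web.config:

```xml
<!-- Sketch: deny anonymous users ("?") access to the Sitecore client
     on the Content Delivery servers. Repeat for other sensitive paths. -->
<location path="sitecore">
  <system.web>
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</location>
```

This only covers the application layer; the same paths should still be blocked earlier in the chain.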

Redirect requirements

If we only took the technical aspects of a solution into account, all redirects would be handled by the reverse proxy. That wouldn't be a suitable solution for the content editors, as they need to create new redirects on the fly. In our situation they don't have access to the reverse proxy, so the only possibility they would have is to file a task with the team that administrates the reverse proxy, with a request to configure those redirects. That would create serious delay in providing new redirects. Imagine what would happen when they make some errors. That would take ages ;).

That’s why we split the redirects into four categories:

Redirect all http requests to https

Serving sites over http is not safe for the end user. Google even uses https as a ranking signal, thus boosting sites that use https. This kind of redirect should be handled as early as possible in the infrastructure, and it’s a one-time configuration which has become part of our standard operating procedures when configuring new sites. This configuration takes place on the reverse proxy.

Redirecting early in the chain prevents a lot of configuration that would otherwise be needed on other layers. Think about configuring firewalls, load balancers and webservers: they would all have to support both http and https traffic, and both the www- and non-www URLs. At the Sitecore level, extra configuration would be needed to serve both sites, and it would introduce complex scenarios when posting data over the internet. When do you choose http, and when https? What would it mean for developers? What kind of security flaws would you introduce?

Note: in the end you want to get rid of non-encrypted traffic. By implementing HTTP Strict Transport Security, it’s possible to protect against downgrade attacks, and it’s easier to protect against cookie hijacking. Read this excellent article by Troy Hunt on this technique.
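A sketch of what that could look like on the webserver (the max-age value is an assumption; tune it to your own rollout, and only send the header over https responses):

```xml
<!-- Sketch: send the HSTS header so browsers refuse to downgrade to http
     for the next year, including subdomains. -->
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Strict-Transport-Security" value="max-age=31536000; includeSubDomains" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
```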

Redirect www to non-www, or vice-versa

All of our sites should be accessible via the www- or via the non-www variant; this is a single configuration that should happen once during the lifecycle of a domain. Customers tend to access sites as “domain.com” in the browser, which lacks the “https” protocol and the “www” subdomain. The same reasons as for the http/https redirect apply: it prevents a lot of configuration. An extra benefit is that it’s easier to configure and maintain canonical URLs for your content if you just have one endpoint per site on the IIS servers, which is very important for the SEO value of your site.

Note: the http/https and www/non-www redirects can be combined into one redirection rule; this prevents one additional redirect in the redirection process.
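Such a combined rule could look as follows in IIS URL Rewrite syntax (domain.com is a placeholder; on our setup this rule would live on the reverse proxy, in its own rule language):

```xml
<!-- Sketch: one rule that catches both plain-http and non-www requests
     and 301-redirects them to the canonical https://www variant. -->
<rule name="Canonical https www" stopProcessing="true">
  <match url="(.*)" />
  <conditions logicalGrouping="MatchAny">
    <add input="{HTTPS}" pattern="off" />
    <add input="{HTTP_HOST}" pattern="^domain\.com$" />
  </conditions>
  <action type="Redirect" url="https://www.domain.com/{R:1}" redirectType="Permanent" />
</rule>
```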

Complex redirects

A complex redirect is a redirect which uses regular expressions or other conditions. These kinds of redirects are not configured often, but typically appear during migrations, after rebuilding, or after restructuring a site. As those regular expressions can break the sites and should be tested thoroughly (preferably across the DTAP street), we prefer that content managers are not able to configure them.

An example of redirecting URLs using regular expressions is the following. An old ASP.NET site presented articles using the following URL: http://www.domain.com/article.aspx?id=2&title=some-cool-title. When migrating to Sitecore, a more friendly URL would be: https://www.domain.com/article/2/some-cool-title. This can be achieved by using regular expressions. These are the places where these regular expressions can be configured:

  • Solve the complex redirects in the Reverse proxy
  • Solve the complex redirects in Sitecore
  • Solve the complex redirects in IIS using the URL Rewrite 2.0 module

Solve complex redirects in the reverse proxy

If it were possible for the build team to configure those redirects, this would be a superb solution, but often this is not possible. This means:

  • larger throughput time
  • changes for DTAP would take a lot of time
  • very application-specific changes
  • often the reverse proxies are not configured in such a way that they can redirect the paths

This is, in our case, not really a viable option.

Solve the complex redirects in Sitecore

Solving complex redirects in Sitecore could be an option, but it requires quite a lot of complex, tailor-made code, while the same solution could be implemented using out-of-the-box IIS functionality. The URL Rewrite module by Andy could help here; it offers some of the same functionality as the IIS URL Rewrite 2.0 module, but for us it has some drawbacks:

  • It’s custom code. IIS and the reverse proxy offer functionality that is, we assume, thoroughly tested. Is it optimized for performance?
  • Although the code has a high quality level, it causes a dependency on an open source project that is not very actively maintained. (If we decide to use it, we will actively contribute to the project.)
  • It doesn’t provide testing of patterns, which the IIS URL Rewrite 2.0 module does offer. (It does have a neat page to test URLs, though, to see what rules are being hit.)
  • More complexity in denying content editors access to certain redirect rules.

Although this option could work in other scenarios, for other customers, we didn’t choose it (yet).

Solve the complex redirects in IIS using the URL Rewrite 2.0 module

This module is officially supported and created by Microsoft. It even fits in a cloud scenario, as Azure web applications can make use of this module as well. However, not all of its capabilities are supported there: it’s possible to write a custom provider for this module, which means the module can be pointed to other sources, for example databases, files or even Sitecore, to get its configuration. Those providers need to be deployed to the Global Assembly Cache, which isn’t possible on Azure.

An advantage of the rewrite module is that it has a complete interface with a lot of options. Complex (very complex) rules can be configured and it’s possible to include conditions, for example to match the HTTP_HOST server variable against the hostname of the site. Another advantage of the URL Rewrite 2.0 module is that it kicks in before the Sitecore pipeline, so no additional processing power is needed and no additional database queries are made when a request meets the rule conditions.

The output of testing the pattern results in the following screen:

These results can be used to concatenate the new URL (article/{R:1}/{R:2}).

Configuration changes can easily be made by editing the web.config. In a later blogpost I will explain how to do this using web.config transformations during deployments, so no manual changes are necessary; the rules look as follows and can easily be templated:
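A hedged sketch of such a rule (the rule name and hostname are placeholders; note that IIS URL Rewrite matches the query string in a separate condition, not in the url pattern):

```xml
<!-- Sketch: 301-redirect /article.aspx?id=2&title=some-cool-title
     to /article/2/some-cool-title using the URL Rewrite 2.0 module. -->
<rewrite>
  <rules>
    <rule name="LegacyArticleRedirect" stopProcessing="true">
      <match url="^article\.aspx$" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^www\.domain\.com$" />
        <add input="{QUERY_STRING}" pattern="^id=([0-9]+)&amp;title=([_0-9a-z-]+)$" />
      </conditions>
      <action type="Redirect" url="article/{C:1}/{C:2}" appendQueryString="false" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>
```

The hostname condition, the regular expression and the action are exactly the parts that lend themselves to templating per environment.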

This means that the condition match for the hostname, the regular expression and the action type can easily be injected at deployment time, using MRM tokenization or the facilities in Octopus Deploy. This helps in a continuous delivery or even continuous deployment process when one of those mechanisms is in place.

As administrators have access to those environments and changes can be automated using the web.config transformations, this seems to be a very viable solution: it prevents content editors from making complex changes, while keeping the velocity needed to make changes.

Simple redirects

The last category is “simple redirects”. An example is redirecting https://www.domain.com/actionable-url to https://www.domain.com/path/to/page: an incoming path to any other URL, internal or external. The content editor must be able to set the friendly URL and redirect it to an existing page. Sitecore offers the “alias” feature, but this feature doesn’t cut it: it doesn’t redirect but rewrites, it’s only viable when working with single sites and it doesn’t work with external URLs. Aliases with the same name for different sites cannot be configured. This is the part where some tailor-made code will do its job. There are two options here:

  • Writing an IIS URL Rewrite 2.0 provider which interacts with Sitecore
  • Write a custom pipeline for Sitecore

Writing an IIS URL Rewrite 2.0 provider which interacts with Sitecore

As stated before, this provider does not work on Azure, as it has to be deployed to the Global Assembly Cache. A great advantage could be that Sitecore can act as a provider: content editors would be able to configure simple redirects, which are handled by the provider. My gut feeling is that Microsoft doesn’t actively put any effort into this provider pattern anymore, as it’s hard to find the SDKs for these providers.

Writing a custom pipeline for Sitecore

This solution can be very viable. Content editors would only have to configure the from-path and the to-URL (exactly the same as with the IIS URL Rewrite 2.0 provider), but the redirects would be handled by a custom pipeline processor. A simple caching mechanism could be used here to improve performance.
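A minimal sketch of such a processor, assuming a hypothetical in-memory redirect map (the class name, map contents and patch location are illustrative, not a finished module):

```csharp
using System.Collections.Generic;
using Sitecore.Pipelines.HttpRequest;

// Sketch of a httpRequestBegin processor: if Sitecore resolved no item for
// the requested path, look the path up in a redirect map and issue a 301.
// RedirectMap is a stand-in for editor-managed redirect items, cached.
public class SimpleRedirectProcessor : HttpRequestProcessor
{
    private static readonly Dictionary<string, string> RedirectMap =
        new Dictionary<string, string>
        {
            { "/actionable-url", "/path/to/page" } // hypothetical example entry
        };

    public override void Process(HttpRequestArgs args)
    {
        // Only act when Sitecore could not resolve an item itself.
        if (Sitecore.Context.Item != null || args.Context == null)
            return;

        string path = args.Url.FilePath.ToLowerInvariant();

        string target;
        if (RedirectMap.TryGetValue(path, out target))
        {
            args.Context.Response.RedirectPermanent(target, true);
        }
    }
}
```

In a real module the processor would be patched into the httpRequestBegin pipeline after the item resolver, the map would be filled per site from redirect items, and the cache would be invalidated on publish.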

This module should handle the real Sitecore scenarios: multi-language, configurable for multi-site solutions, easy access for editors, and reporting on redirects using the analytics database.

When a plan comes together

When these four kinds of redirects are combined, the following scenarios are handled as follows:

http://domain.com/action-path -> https://www.domain.com/the/real/path/for/sitecore

  1. The reverse proxy redirects http://domain.com/action-path to https://www.domain.com/action-path using a 301 redirect
  2. IIS URL Rewrite: no configured regular expression matches this URL, so no redirect takes place
  3. The path /action-path is redirected to /the/real/path/for/sitecore, so https://www.domain.com/action-path is redirected to https://www.domain.com/the/real/path/for/sitecore using a 301 redirect

http://domain.com/article.aspx?id=2&title=some-cool-title -> https://www.domain.com/article/2/some-cool-title

  1. The reverse proxy redirects http://domain.com/article.aspx?id=2&title=some-cool-title to https://www.domain.com/article.aspx?id=2&title=some-cool-title using a 301 redirect
  2. The IIS URL Rewrite module kicks in: article.aspx?id=2&title=some-cool-title matches the regular expression “^article\.aspx\?id=([0-9]+)&title=([_0-9a-z-]+)” and is 301-redirected to /article/2/some-cool-title
  3. This page is available in Sitecore and is served. This page could, in its turn, have been redirected by a content editor to another location.

As seen in the two scenarios above, most of the time the number of redirects is limited to two. Theoretically there could be a third redirect, but this shouldn’t happen too often. Another advantage of this mechanism is that paths coming from the different https/http and www/non-www combinations can be handled in a single configuration item, instead of four different mappings.

Using these four kinds of redirects, all page requests that reach the IIS webservers have the https://www.domain.com structure, which greatly reduces complexity. All redirects that are more application-specific can be handled by IIS. Complex and simple redirects only have to be configured for a single URL, instead of four:

As seen in the picture above, a single domain on https has to be configured throughout the whole chain, while the application-specific redirects can be handled by IIS.

Conclusion

Taking another approach to redirects, by configuring them in other parts of the infrastructure, can greatly reduce complexity and traffic, and it might increase security. It can reduce the need for custom code in your Sitecore modules and greatly reduce the complexity of that code. Sitecore is not the centre of the universe. (Well, not always ;))


Improving the Sitecore Logging and diagnostics experience™ part 1: expose more information using a new Logger


Lately I have been working on improving the Sitecore Logging Experience™. Sitecore uses the log4net framework to handle all kinds of logs, but with the standard configuration and implementation we’re not making the most out of it. Where Alex Shyba wrote some excellent posts on writing your logs to SQL to make them more easily accessible, I am going to take the logging capabilities to the next level! In this blogpost I will describe why the out-of-the-box Sitecore logging implementation should be improved, how to do this, and eventually I’ll show how to modify the appenders to expose some extra trace information. This is all a step-up to my next blogpost, in which I will explain how all the Sitecore logs can be sent to Application Insights on Azure to get even better insights into your application usage!

The out of the box logging experience with Log4Net

The Sitecore logging mechanism is based on an old version of Apache’s log4net, which is a very flexible and pluggable logging provider. The log4net library has been included in Sitecore in its own assembly, called Sitecore.Logging. While there is a default log4net logging façade (in the Sitecore.Logging assembly, namespace log4net.spi.LogImpl), Sitecore provides a specific Sitecore logging façade as well. It’s available via the Sitecore.Kernel assembly, in the Sitecore.Diagnostics namespace. When referencing this library, a log entry can be written by simply entering the line “Log.Info(“message”, this)”. This line logs “message” to the specified logger, if available. The logger is specified by “this”, which represents the type of the current class. This page has an explanation of how log4net can be configured.

Because the log4net implementation is a) outdated and b) hosted in a Sitecore assembly, it’s not possible to easily use 3rd party solutions for log4net with Sitecore. The 3rd party solutions generally use the newer implementations of the LogEventInformation class (which has been altered over time) and they can’t find the log4net assembly, because it isn’t there.

Log4Net Loggers

A logger is a piece of configuration in which the name of the logger is specified, together with some configuration options:

  • Name: the name of the logger. When logging, by specifying the name of the logger or by specifying the type (remember, this?), log4net checks whether this logger is available. (More info on the documentation page ;))
  • Level: the minimum logging level that should be handled. For example, if the level INFO has been specified, all messages of level DEBUG are ignored.
  • Appender: the target where the logs should be stored. This can be anything; appenders are configured separately.
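A logger in the Sitecore log4net configuration section could look like this (the names are illustrative, not from a default installation):

```xml
<!-- Sketch: route everything logged under the MyCompany.Website namespace,
     at level INFO or higher, to a dedicated appender. -->
<logger name="MyCompany.Website" additivity="false">
  <level value="INFO" />
  <appender-ref ref="LogFileAppender" />
</logger>
```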

Appenders

In the /sitecore/log4net/ configuration section these “appenders” can be found; an appender is a target to which log4net can write its information. The Sitecore out-of-the-box appender is the SitecoreLogFileAppender, which writes its logs to the /data/logs/log-{date}.txt files. Other examples of appenders are:

  • ADONetAppender
  • ASPNetTraceAppender
  • EventLogAppender

Custom appenders can be created as well, but that’s not part of this blogpost; more on that in part II – Sending logs to Microsoft Application Insights on Azure. As part of the configuration, log4net conversion patterns can be used to determine which data will be logged. The out-of-the-box setting is as follows:

“%4t %d{ABSOLUTE} %-5p %m%n”

which results in writing the following logs:

According to the documentation these conversion pattern names mean “thread ID”, “date”, “level” and “message”. This message can be useful for hunting and solving bugs. As shown in the picture above, I included information about the controller, the classes that were instantiated and the methods that were used, but this can cause a lot of rework, especially when refactoring your application. It can even be confusing when the logging information is not updated when the code is refactored, or when the logging information has some typos. According to the documentation this shouldn’t be needed, as there are a lot more conversion patterns that can be used for diagnostics. A lot of people use class names and methods in their logs, but this information is already part of the log event: “%M” and “%C” can be used to include it in the logging message. When changing the config to use this pattern, the following logging will be written:
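The changed layout could look like this in the appender configuration (a sketch; note that the log4net documentation warns that %C and %M are expensive to generate, so measure the impact):

```xml
<!-- Sketch: include the calling class (%C) and method (%M) in every log line. -->
<layout type="log4net.Layout.PatternLayout">
  <conversionPattern value="%4t %d{ABSOLUTE} %-5p %C.%M %m%n" />
</layout>
```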

For every log entry the type “Sitecore.Diagnostics.Log” and method “Info” are logged, while I expected my custom controller and classes to appear in these logs. Something is wrong, but what? And more importantly: how can this be fixed?

How can this be fixed?

The fix to include the right information can be fairly easy. The Sitecore logging implementation for log4net is a façade that encapsulates the default log4net façade. Basically, it comes down to the following pattern:

The above example is very, very simplified, but it gets to the point: our custom class calls the Sitecore logging implementation, which, in its turn, calls the log4net façade. This log4net façade executes the “Assembly.GetCallingAssembly()” method, which returns the assembly of “Sitecore.Diagnostics.Log”.

The trick is to write an implementation which looks like the log4net façade: a Sitecore.Diagnostics.Log which calls “Assembly.GetCallingAssembly()” itself and passes that information into the Log method of the ILogger:
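A simplified sketch of such a façade, assuming the old log4net.spi namespace that ships in Sitecore.Logging (member names and signatures may differ slightly per version):

```csharp
using System;
using System.Reflection;
using log4net;
using log4net.spi;

// Sketch: a drop-in Log façade that tells log4net where the façade ends,
// so %C/%M resolve to the real caller instead of the façade itself.
public static class Log
{
    public static void Info(string message, object owner)
    {
        if (owner == null) return;

        // Resolve the logger against the caller's assembly and type.
        ILog log = LogManager.GetLogger(Assembly.GetCallingAssembly(), owner.GetType());

        // Passing typeof(Log) as the stack boundary makes log4net skip
        // this type when it walks the stack to find the calling method.
        log.Logger.Log(typeof(Log), Level.INFO, message, null);
    }
}
```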

When using this implementation, together with the proposed conversion patterns, the logs look like as follows:

Now the correct calling assembly and the correct calling method are logged, which enables developers to use shorter, meaningful messages in their code, and the logs give much more insight into the code that was executed. The good thing is: Sitecore can fix this, and the fix can be backwards compatible with the current logging experience, while the exposed logging can be greatly improved! It would even be possible to upgrade to the current version of log4net!

On my GitHub page I created a new SitecoreDiagnostics repository with a new logging implementation. I didn’t test this in production, but it’s an example of how easily this stuff can be fixed. The repository includes a log4net implementation which is able to write the correct information, as well as the Application Insights appender to submit those logs to the Azure service.

Summary

The default Sitecore implementation of log4net is not optimal: the implementation is outdated and doesn’t support 3rd party add-ons. It’s not hard to improve the current implementation, and building a custom logger façade that uses the Sitecore log4net implementation and is able to write into the standard Sitecore logs is quite easy and opens up possibilities. One of these possibilities is appending the information to Application Insights and taking the insights to the next level, as every logged item, your custom logs as well as the Sitecore logs, can be correlated and traced. See below for a sneak preview. In this overview we see custom logging, the calling class, method, line number and all other events that have been logged for this request. Even some of the stored procedures for this request are visible in this image. How cool is that?

Improving the Sitecore logging and diagnostics Experience™ part 2: logging to Application Insights


In my previous blogpost I wrote about improving the Sitecore logs, which was a prerequisite for this blogpost: sending all that logging information to Application Insights. This blogpost explains how to do this. Application Insights is a tool, hosted on Azure, which helps to get a 360-degree view of your application. It tracks application health, adoption metrics and crash data. With several interactive tools it’s possible to filter, segment and drill down into event instances for details. With a few clicks it’s possible to view the whole logged call stack of your application. The great thing is: this is not limited to your custom logs; the full stack of logs, thus custom and Sitecore logs, will show up in this tool. The platform is not limited to Microsoft either: there are a lot of SDKs available for other technologies.

All source code can be found in my SitecoreDiagnostics repository on GitHub.

How can Application Insights help?

When sending all the data to Application Insights, the service can help to get insights into page requests, views, exceptions and other metrics. What is happening, which dependencies are there, what kind of SQL statements are executed, et cetera. Not only your custom logging is shown, but the Sitecore dependencies as well. In the paragraphs below I will explain one use case extensively and will show others by just displaying some cool graphs. In the end, all data can be used in Microsoft Power BI as well! I will explain one use case on narrowing down to some server-side events. Other overviews that can be created are overviews of all exceptions, failed requests, page views, user sessions and active users. When set up correctly, the performance counters of a server can be sent to Application Insights as well.

Manual tracing

Open up Application Insights; the default view looks as follows:

Selecting each of the graphs results in a different follow-up screen. I selected the first graph, as I was interested in the server response times. I added the diagnostic timeline myself; it shows a lot of exceptions. This is due to MongoDB crashing on my local machine, which generated a lot of exceptions.

The next screen that opens after selecting the server response time is the Server responses page. This is a preconfigured page with diagnostic information on server response times, the dependency durations and the number of server requests completed. It’s possible to easily filter down to a smaller time range on these graphs, by selecting the range and pressing the magnifying glass:

After pressing the button, all events are filtered. Every row can be selected to drill down to even more information. In my case, I want to know why the Sitecore/Index request is taking a lot of time to execute. The average call takes 2.3 seconds, with a standard deviation of 4.64 seconds!

Some requests take 4 seconds or more to load, while others are executed in 100-200 milliseconds. Let’s narrow down to the request that takes 5.8 seconds. A lot of information is shown. I removed a lot of remote dependencies (all SQL calls); those are measured automatically, and if there is a stored procedure, it is logged as well. WebAPI dependencies and WCF calls are measured automatically too!

This overview gives the ability to drill down even further. When selecting “All available telemetry”, all related events, traces, dependencies and exceptions are shown. And when I say all, I really mean “all”: all related Sitecore-generated logs appear in this overview as well:

This telemetry can be narrowed down even further, to get all meta information that is available for this log. Remember my previous blogpost, where I explained how to log the calling class and method to the Sitecore logs? The reason is below: this information (and even more) appears in the meta overview of this call! Class names, methods, identities, line numbers, even the source code file shows up in this overview! And with just a bit of extra work, the calling site, environment or other data can be exposed. In the case below, I selected a custom log that was created by my own code.

When selecting a Sitecore-generated log event, other information shows up:

Notice the class name and method name? They always default to Sitecore.Diagnostics.Log. If another telemetry event is selected, for example a telemetry dependency, other information shows up; in this case, even the SQL command (here, the stored procedure on the core database).

Application Insights search in Visual Studio

From the Visual Studio interface, it’s possible to search through the logging as well:

Using and Configuring Application Insights

In a normal scenario, Application Insights can be configured when creating a new web application; it’s part of the new project modal dialog. This will create a new project, add a new Application Insights resource to Azure, configure the instrumentation key, configure the web.config and add some JavaScript to the _Layout.cshtml to make sure that every pageview is tracked. These actions will be correlated to the events that are executed server-side. The Trace API can be used directly to write to Application Insights, but the logging framework of choice can be used as well.

Configuring Application Insights for Sitecore

In our case, the Sitecore case, configuring Application Insights is a bit harder. You can follow the above practice and just throw away anything that you don’t need, or start off with an empty web application, without the Application Insights instrumentation. In that case, you need to do the following:

  • Create a new Application Insights resource
  • Add the following NuGet package to your solution:
    • Microsoft.ApplicationInsights.Web

This will add some JavaScript files; make sure that your layout page references these JavaScript files and make sure to execute the JavaScript that is provided by Application Insights on every single page. The custom JavaScript that needs to run on your page can be found here:

The web.config has to be modified as well, as some http modules have to be added. I chose to use XML Document Transformations (XDT) for this, to be able to do it in a repeatable way on different environments:
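A sketch of such a transform (the module name and assembly are taken from the standard Application Insights web package; verify them against the version you install):

```xml
<!-- Sketch: insert the Application Insights http module via an XDT transform. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.webServer>
    <modules>
      <add name="ApplicationInsightsWebTracking"
           type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web"
           xdt:Transform="Insert" />
    </modules>
  </system.webServer>
</configuration>
```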

The InstrumentationKey is a key that identifies the application and is generated by Application Insights. In normal scenarios this key can be found in the ApplicationInsights.config, but I decided to make it configurable in the Sitecore settings. The patch file has to be altered for this.

The last part is adding the Application Insights appender for log4net NuGet package. And that’s where things go wrong: this package doesn’t work, for two reasons:

  1. The Sitecore log4net implementation is outdated. The LogEventInformation class, for example, isn’t compatible anymore with the AI appender.
  2. The appender looks for the log4net assembly. As Sitecore decided to host its own version, this assembly cannot be found. Adding the log4net assembly will not work either, as it won’t pick up the configuration and it wouldn’t be possible to send Sitecore data to Application Insights.

That’s why I added a custom ApplicationInsightsAppender to the project on GitHub, which can handle the outdated implementation. A downside is that I had to use some reflection to be able to send exceptions to the service.

Last, but not least, the custom appender has to be added to the log4net configuration. I decided to only add the appender to the root logger; all the other loggers will not log to Application Insights. These changes can be made in the patch file.

I created a Sitecore-specific variant and placed it in this repository. Compile it and deploy the .Web project to the environment and you should be good to go (apart from the web.config changes). The test project contains a layout page and a controller rendering with both the JavaScript and some logging on it, which can be used to generate some test data.

Summary

Application Insights can be a really good addition to your toolset for analyzing your Sitecore web application. Not only does it give insights into your own application, but it’s possible to trace down to the Sitecore logs as well!

Sitecore Security #1: How to replace the password hashing algorithm


Let’s face it: it’s a business nowadays to hack sites, retrieve personal information and sell it on the black market; think of usernames, passwords, credit card details and so on. Often this data is stolen using SQL injection attacks, which may be possible on your Sitecore site as well, so it’s better to be safe than sorry. As Sitecore ships with an old hashing algorithm for handling Sitecore user logins, it’s time to replace that hashing algorithm. For a fresh installation this isn’t much of an issue, but for existing installations you will face the challenge of upgrading your existing users, because the password hashing algorithm will change. This blogpost will show how to upgrade the hashing algorithm, describe those challenges, and tell you how to increase your Sitecore security.

Find the sources on https://github.com/BasLijten/SitecoreDefaultMembershipProvider for use in your own Sitecore environment!

The default Membership configuration

The default SQL Membership provider is configured out of the box as follows:
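The relevant part of that configuration looks roughly like this (abbreviated sketch; attribute values are taken from a default installation and may differ per Sitecore version):

```xml
<!-- Sketch of the default chain: Sitecore's provider wraps the old
     SqlMembershipProvider, with SHA1 hashing and a weak password policy. -->
<membership defaultProvider="sitecore" hashAlgorithmType="SHA1">
  <providers>
    <clear />
    <add name="sitecore"
         type="Sitecore.Security.SitecoreMembershipProvider, Sitecore.Kernel"
         realProviderName="sql" providerWildcard="%" raiseEvents="true" />
    <add name="sql"
         type="System.Web.Security.SqlMembershipProvider"
         connectionStringName="core"
         applicationName="sitecore"
         minRequiredPasswordLength="1"
         minRequiredNonalphanumericCharacters="0"
         requiresUniqueEmail="false" />
  </providers>
</membership>
```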

Sitecore is configured to use the provider “sitecore”, which redirects to the “real” provider called “sql”, which is, in fact, the old membership provider from 2008.

When we look closely at the configuration, we notice a few things:

  • Hashing algorithm SHA1 is used (which was standard back in those days)
  • The default password policy is pretty weak

The hashing algorithm

Hashing is an irreversible form of obfuscating a value, which makes it impossible to recover the original value from the hash without brute forcing it: hashing every possible candidate until one produces a hash equal to the original. The longer the hashing function takes to execute, the safer it is to store passwords with that hash function.
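To illustrate the cost difference, here is a small stand-alone sketch using the .NET `Rfc2898DeriveBytes` class (an implementation of PBKDF2); the iteration count of 10,000 is an arbitrary example value:

```csharp
// Illustration: a single SHA1 pass versus PBKDF2 with many iterations.
// PBKDF2 deliberately repeats the hashing work, so every brute-force
// guess becomes proportionally more expensive for an attacker.
using System;
using System.Security.Cryptography;
using System.Text;

class HashCostDemo
{
    static void Main()
    {
        byte[] password = Encoding.UTF8.GetBytes("p@ssw0rd");
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt);
        }

        // One fast SHA1 pass: cheap to compute, cheap to brute force.
        byte[] sha1Hash;
        using (var sha1 = SHA1.Create())
        {
            sha1Hash = sha1.ComputeHash(password);
        }

        // PBKDF2 (HMAC-SHA1) with 10,000 iterations: deliberately slow.
        byte[] pbkdf2Hash;
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, 10000))
        {
            pbkdf2Hash = pbkdf2.GetBytes(20);
        }

        Console.WriteLine("SHA1:   " + Convert.ToBase64String(sha1Hash));
        Console.WriteLine("PBKDF2: " + Convert.ToBase64String(pbkdf2Hash));
    }
}
```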

Changing the hashing algorithm

To use an alternative algorithm, all that is needed is to change the “hashAlgorithmType” to another supported hashing algorithm. From a security perspective, there are a few recommended algorithms, for example “bcrypt” and “PBKDF2”, but they are not part of the Microsoft .NET Framework. The SqlMembershipProvider that is used by Sitecore only supports SHA1 and SHA256. Sitecore uses the SHA1 version, probably because of backwards-compatibility issues.

Option 1: Changing the membership providers

First, the possibility to change the SqlMembershipProvider into a newer one was explored: the DefaultMembershipProvider or the SimpleMembershipProvider. Although it’s not too much work to change this membership provider, it is not recommended:

  • The new membership providers require Entity Framework to run. This requires a change to connectionstrings.config and the deployment of this framework, which might conflict with your current assemblies
  • Those providers use new tables in the core database and require a migration of the old tables to the new ones. This may cause issues when upgrading to a new Sitecore version
  • The DefaultMembershipProvider does not support bcrypt or PBKDF2
  • The SimpleMembershipProvider uses PBKDF2 (with 1000 iterations), but it may be possible that this one does not fully work with Sitecore (I didn’t test this)

Option 2: Adding new hashing algorithms to Sitecore

Another option was to deploy the new hashing algorithms to Sitecore. It turned out that this was the most secure and easiest way to increase the security level.

The company “Zetetic” created .NET versions of these algorithms which can be used within your application. Before such an algorithm can be used, it has to be registered first. I chose to do this in the initialization pipeline of Sitecore with just a single line of code:
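For example, in a sketch of an `initialize` pipeline processor; the type `Zetetic.Security.Pbkdf2Hash` is assumed to come from the Zetetic.Security package:

```csharp
// Sketch: registers Zetetic's PBKDF2 implementation under a custom name,
// so it can be referenced from the membership hashAlgorithmType attribute.
using System.Security.Cryptography;
using Sitecore.Pipelines;

public class RegisterHashAlgorithm
{
    public void Process(PipelineArgs args)
    {
        // The single line that does the work:
        CryptoConfig.AddAlgorithm(typeof(Zetetic.Security.Pbkdf2Hash), "pbkdf2_local");
    }
}
```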

This line of code makes the hashing algorithm available for use in the membership configuration section of Sitecore. Change the “SHA1” hashAlgorithmType to “pbkdf2_local” and you’re good to go. Except for one small issue: due to the change of the algorithm, no one will be able to log in anymore.

Resetting the admin password

Step one would be to reset the administrator password. I created a small console app to do this, which can be found on GitHub; it contains just a few lines of code:
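A minimal sketch of such a console app, assuming its App.config contains the core connection string and the same membership configuration as the site:

```csharp
// Sketch: reset the admin password through the standard Membership API,
// so the new password is hashed with the newly configured algorithm.
using System;
using System.Web.Security;

class Program
{
    static void Main()
    {
        MembershipUser admin = Membership.GetUser(@"sitecore\admin");

        // ResetPassword requires enablePasswordReset="true" on the provider.
        string temporaryPassword = admin.ResetPassword();
        admin.ChangePassword(temporaryPassword, "b");

        Console.WriteLine("Admin password has been reset.");
    }
}
```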

If your installation is brand new, you are good to go; otherwise, the next step would be to reset the passwords for all other users as well. As the hashed values are irreversible, the only option is to reset the passwords of all of your users, so that they have to choose a new one. Another option is to create a custom membership provider that gradually rehashes the passwords as users log on. Kam Figy wrote a blogpost on this subject as well and has created a prototype membership provider.

Increasing the password policy

Another part is to strengthen the password policy, as the out-of-the-box policy is pretty weak. This page describes the parameters that can be configured. I would recommend setting at least the following parameters:

  • maxInvalidPasswordAttempts: 5
  • passwordAttemptWindow: 10
  • minRequiredPasswordLength: 12
  • minRequiredNonAlphanumericCharacters: 2
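Applied to the “sql” provider, the configuration would look something like this (a sketch; other attributes omitted for brevity):

```xml
<add name="sql"
     type="System.Web.Security.SqlMembershipProvider"
     connectionStringName="core"
     applicationName="sitecore"
     maxInvalidPasswordAttempts="5"
     passwordAttemptWindow="10"
     minRequiredPasswordLength="12"
     minRequiredNonalphanumericCharacters="2" />
```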


A better approach would be to use the following regular expression (passwordStrengthRegularExpression), which forces passwords to contain at least one upper case letter, one lower case letter, one number and one special character, with a minimum length of eight characters:

^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*\W).{8,}$

Summary

Increasing the password security is not hard and I really suggest that you do this right NOW! Although it might bring some inconvenience for your customers, it will really strengthen your security. Don’t believe me? Watch this short YouTube movie:

Realtime personalization monitoring with Sitecore and google analytics


Some of our bigger sites, which don’t run on Sitecore yet, use Google Analytics to monitor events on a website in real time: think of forms that are submitted and personalizations that are shown to a specific user. Most of the time, external (JavaScript) tooling is used to inject those personalizations, and an event needs to be implemented which will be sent to Google Analytics to register that event. In Sitecore, we can implement those Google Analytics events by including JavaScript in our razor views. But how can we tell whether the component that was shown was part of a personalization flow? Was a custom datasource selected, was the completed component rendered as a personalization? This blogpost series teaches you how to determine what kind of personalizations were exposed to a user and how to tell external systems about those events. It turned out that a (beautiful) pattern can be used that Sitecore itself already introduced a while ago.

All source code can be found here on GitHub

The use case – sending real-time events to google

A user visits a site for the first time and sees a few blocks of information on the site. As it’s the user’s first visit, no profile has been built yet and all the default renderings are shown. One of these blocks is a call-out for a travel insurance. The user appears to be the owner of a small company and navigates through the “business” area of the site. Due to this behavior, the user gets the “Business user” profile applied. At the moment that he returns to the front page, the “travel insurance” call-out gets replaced by a “business travel insurance” call-out; nothing special so far. Our marketers expect that for every personalization, a custom event will be sent to Google Analytics with the name of that personalization, which tells Google Analytics (or any other analytics tool) that a personalization event has taken place. And this is exactly where things become harder.

The Challenge

What’s so hard then? The requirement to only send an event when a personalization has taken place is the hard part. A developer could include, in every view that he creates, the JavaScript to send an event to Google Analytics:

ga('send', 'event', 'Personalization', 'Name of personalization');

But how could you possibly know if a component was injected as a conditional rendering? Or that only the datasource was personalized? What was the name of the rule that was applied? What if the developer forgot to inject the JavaScript? Or what if the rendering wasn’t a personalized rendering? The short answer is: this is not possible, at least not in an easy, scalable way, and chances are big that if there was an easy solution, every razor view would contain logic to a) determine if the JavaScript should be injected and b) inject the JavaScript if this was the case.

Analysis of the challenge

As stated before, there are two major issues:

  • When and how do we know what personalizations have been applied for each component?
  • How can we automatically notify google analytics with the personalization events?

Aside from those two questions, there were a few additional requirements:

  • The solution has to be easily maintainable by a developer
  • The solution has to work in a multi-site setup
  • The solution may not mess up the HTML – we have to come as close to the html as the designers deliver to us.
  • No overhead for our content editors or marketers – a new personalization shouldn’t cause any effort

What personalization is applied?

The most important part of the solution is to find out which personalizations are applied; there are a few ways to do this.

The rendering pipeline

The first solution that I came up with was to intercept the rendering pipeline. Sitecore runs a pipeline for each rendering, which executes a few different steps:

The first part of this getRenderer pipeline is clear: this is the part where the customizations are determined. Under the hood, it runs the mvc.customizeRendering pipeline:

The Personalize processor is where the magic happens: the rules are determined and executed, to find out what rendering should be rendered. A great place to find out what rule was applied. But how could we insert JavaScript to the response, to send the events to google analytics?

As the Personalize processor only applies the actions that are tied to the rules, for example setting a different datasource or a completely different rendering, it does not store any information on which rule was applied. After the Personalize processor, the customization pipeline falls back directly to the getRenderer pipeline and tries to render the rendering that was set by the personalization step.

This means that we cannot directly find out from the rendering step which personalization was applied. Introducing a new processor at this stage, running all the personalization rules again, selecting the name of the applied rule and injecting some JavaScript before or after the rendering, would get complex and it would mess up the clean HTML structures that we wanted to use.

It became clear that this wasn’t the most beautiful solution to solve the challenge.

Using Analytics

Browsing around in the interactions section of the xDB set me in the right direction. It appeared that all personalizations are stored in the xDB!

For every interaction, the page that was visited and the rules that were exposed to the visitor are stored. It appears that the RuleId is the ID of the rule that was applied on the rendering. Too bad no information on the rule name or rendering is stored as well. This information is also available via the API (late in the pipeline lifecycle, that is! This means that this information is not yet available during the getRenderer pipeline, which sounds logical to me; it sounds a bit like the chicken-and-egg paradox), via the following code:
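A sketch of reading that data from the current interaction; the exact property paths are assumptions based on the Sitecore.Analytics API and may differ per Sitecore version:

```csharp
// Sketch: enumerate the personalization data that xDB recorded for the
// current page of the current interaction.
using Sitecore.Analytics;

public class PersonalizationReader
{
    public void ReadAppliedRules()
    {
        var currentPage = Tracker.Current.Session.Interaction.CurrentPage;

        foreach (var personalization in currentPage.Personalization)
        {
            var ruleSetId = personalization.RuleSet; // ID of the rule set
            var ruleId = personalization.RuleId;     // ID of the applied rule
        }
    }
}
```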

The Personalization property contains information on both RuleSet and RuleId, but the meaningful rule name (the information that our marketers are interested in) is not directly accessible via the API.

Getting the meaningful rule names

As you probably saw in the picture above, the RuleId is stored as a Guid. As “Sitecore is built on Sitecore”, the expectation was that the personalizations were stored somewhere in the Sitecore databases, but that turned out not to be the case. A deep dive into the personalization pipeline taught me that those rulesets are stored on the item itself. Below is a snippet with an example of it.

Parsing the XML directly was a consideration, but this format can change over time and it’s unclear whether this XML contains all the information. And what about global rules, will Sitecore introduce them again? I had a gut feeling that this wasn’t the most stable solution for the near future. All that was left to do was to iterate through all the renderings, find the rules on these renderings and correlate them to the correct name of the personalization rule. I also introduced some logic to ignore the default renderings, as they aren’t real personalizations.

The information on what RuleId was exposed to the visitor is not yet available during the insertRenderings or GetRenderer pipeline, so another moment in time has to be found to use this information.

Create a new processor in the mvc.requestEnd pipeline

This kind of modification belongs in the mvc.requestEnd pipeline. At this moment, all renderings have been customized and inserted, all personalization data is available, and it’s the last point before the HTML is returned to the browser. The perfect place to inject some HTML. But how can HTML be injected into the response of the current context?

The answer lies in HttpContext.Response.Filter. The output of the response is piped through this filter (if one is available), which allows the output to be modified. Sitecore itself uses this pattern to inject, for example, the Experience Explorer control and the device simulation controls. A small look at the code put me in the right direction: Sitecore already created a neat PageExtenderResponseFilter class which allows the page output to be manipulated. I copied that one and modified it to run a new pipeline, the RenderPageExtender pipeline, in which I put a processor that renders the personalization extender (in this case, the Google Analytics variant). This processor contains the logic to find the personalization names and render them into the HTML that is returned to the client. This mechanism can be used to inject other kinds of HTML as well!
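The hook-up can be sketched as follows; the base class and filter constructor follow the pattern described above and are partly assumptions:

```csharp
// Sketch of an mvc.requestEnd processor that wraps the response stream in a
// filter, so the buffered HTML can be extended before it reaches the browser.
using System.Web;
using Sitecore.Mvc.Pipelines.Request.RequestEnd;

public class AddPageExtenderFilter : RequestEndProcessor
{
    public override void Process(RequestEndArgs args)
    {
        HttpResponse response = HttpContext.Current.Response;

        // The filter receives the original output stream and decides what is
        // ultimately written to it; here it runs the RenderPageExtender
        // pipeline to append the personalization script to the page.
        response.Filter = new PageExtenderResponseFilter(response.Filter);
    }
}
```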

Making it usable for developers

Most pieces of the puzzle are in place now; the last part is to make this available in a multi-site solution and to create a solution that requires the least amount of work for developers to inject HTML into the page. For the multi-site part, I chose to make the pipeline configurable by site parameters: if the site name of the current context is not specified as a filter, the processor won’t run at all.

The solution to inject HTML was a bit more complex. The first idea was to create a processor per site and insert the JavaScript via .NET code, but that one is quite error prone, with all the escaping of characters. I decided that I wanted to make use of razor views, dynamically load them and render the output to the response filter. This solution has a few advantages: it’s easier to reuse the processor and customizations can be added very fast, without too much of a hassle. And this solution was also already available in Sitecore: ever wondered why ExperienceExplorerView.cshtml is in the /Views/Shared folder? Exactly for this reason, as it’s dynamically inserted as well.

The Experience Explorer view has some information on which rule was applied for the current user as well. That solution executes all rules a second time after the page was rendered; I chose not to do this for performance reasons.

The code below shows how a controller can dynamically be created, inserted and rendered. The partialName at line 1 comes from the processor parameters as well: this makes it possible to specify a separate razor view per site.
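A sketch of that code; the helper class and the placeholder controller name are simplified for illustration:

```csharp
// Sketch: resolve a partial view by name and render it to a string, so the
// result can be written into the response by the page extender processor.
using System.IO;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

public static class PartialRenderer
{
    // MVC requires a controller instance to build a ControllerContext;
    // an empty one is sufficient for rendering a partial view.
    private class EmptyController : ControllerBase
    {
        protected override void ExecuteCore() { }
    }

    public static string Render(string partialName, object model)
    {
        // line 1: the view name comes from the processor parameters,
        // which allows a separate razor view per site.
        var httpContext = new HttpContextWrapper(HttpContext.Current);
        var routeData = new RouteData();
        routeData.Values["controller"] = "PageExtender";

        var controllerContext = new ControllerContext(
            httpContext, routeData, new EmptyController());

        var view = ViewEngines.Engines
            .FindPartialView(controllerContext, partialName).View;

        using (var writer = new StringWriter())
        {
            var viewContext = new ViewContext(
                controllerContext, view,
                new ViewDataDictionary(model),
                new TempDataDictionary(), writer);
            view.Render(viewContext, writer);
            return writer.ToString();
        }
    }
}
```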

This is the razor view, that easily can be maintained and created by any developer:
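A minimal example of such a view; the model shape (a list of applied rule names) is an assumption:

```cshtml
@* Sends one Google Analytics event per applied personalization rule. *@
@model IEnumerable<string>
<script>
@foreach (var ruleName in Model)
{
    @:ga('send', 'event', 'Personalization', '@ruleName');
}
</script>
```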

Summary

Sending real-time personalization events is possible with Sitecore, although it wasn’t easy to find out how this could be done. No information about personalization is stored on the rendered objects themselves, and injecting HTML into the response wasn’t as easy as expected either. It turned out that there is quite a nice pattern that Sitecore uses internally to inject renderings after the rendering pipeline, which can be used for rendering normal pages as well.


Presenting at Sitecore Symposium 2016 – Keeping hackers out


At the upcoming Sitecore Symposium, starting on September 15th, I’ll have the pleasure of presenting a session about Sitecore Security –  Keeping Hackers out: Secure Application Development for Sitecore.

Fix vulnerabilities before the bad guys find them

Your Sitecore installation might be hardened, but that doesn’t mean jack to a hacker. This session will explain the basics of secure application development for Sitecore, to make your websites safe. Using lots of demos, I will show you common security vulnerabilities, their root causes and how to fix them. You’ll learn why you should invest in security, how easy your site may be to hack, how to fix vulnerabilities, and how to write code that’s secure. You’ll walk away with source code goodies, free to use.

This is an altered and improved session of my SUGCON session in Copenhagen this year, with some great reviews:

Come and join my session

Come over and join me at my session on web application security. Astonishing demos, and maybe you’ll see a glimpse of Bobby Hack.

Sitecore Security #2: Secure connections and how to force the browser to use the secure connection


Secure connections? Why would I even bother? It’s expensive, slow, complex, and I’ve got a firewall anyway, right? At SUGCON in Copenhagen I showed off how easy it is to intercept, modify and redirect unencrypted traffic and what can be done against this threat. This blogpost is all about why you should serve your website always and fully over HTTPS and how the browser can be forced to use this HTTPS connection. It will not show how to configure HTTPS, nor list all the benefits of HTTPS. The technique to achieve this is adding an HSTS header for your domain; Google recently announced that they will introduce this for the complete www.google.com domain as well!


Note: some other great articles have been written about this subject, but I intentionally wrote this article to reach out to the Sitecore (and SharePoint) community!

The configuration is included in the blogpost below; it will also be released as an XDT as part of a bigger security project

It’s not only about the login page

As the Sitecore hardening guide describes:

Make the login page available only to SSL requests

This is not true. This guideline suggests that the login page is the only page that needs to be exposed via HTTPS, but every page should be served over a secure connection. And when SSL is mentioned, the technique that should actually be spoken about is TLS, Transport Layer Security. SSL is completely insecure, while TLS is the improved version of SSL and is nowadays at version 1.2.

Why is this not true? When people are connected to insecure Wi-Fi networks, data can often easily be sniffed, which means all traffic can be captured by other devices. All data that flows over the (virtual) wire can be captured. When data is sent over HTTP, it is unencrypted and can be read by the persons capturing the traffic. Personal interests, details, and so forth can be captured. Imagine what happens when login credentials are sent over HTTP.

Things get even worse, when someone manages to intercept the connection, which is called a “Man in the Middle”. This person would be able to intercept the traffic, monitor it, transform it or even redirect it to another location, while the visitor wouldn’t have any indication on what’s currently happening.

During my session on the SUGCON in Copenhagen, I brought a small device with me which is called a “rogue access point”, which tries to trick devices to automatically connect to this wireless access point. See what happened in a few minutes, before my session even started and almost no visitors were in the room:

But my credentials are posted over HTTPS, right?

That might be true. But at the moment that someone is able to intercept the initial request for the login page, which would be over HTTP, that person is able to alter or redirect that request and prevent it from ever landing on its intended destination:

MITM

I could redirect traffic to any site, fishing email, passwords, et cetera.

Forcing the browser to use HTTPS

To prevent this kind of attack, even the initial request must be sent over HTTPS. Most visitors enter the address of the website they want to visit in their browser:

www.website.com

Note that no protocol has been specified. Browsers initially send this request over HTTP and BAM: you’re vulnerable.

To solve this, the “Strict-Transport-Security” header can be used. When this header has been set, the browser knows that it should never request that website over HTTP, but always use the HTTPS variant of the address. Pretty much all major browsers support this header.

Trust On First Use (TOFU)

Please note that this solution is a TOFU solution: Trust On First Use. If you have never visited the website before, it’s very likely that it will be requested over HTTP. At this point, the visitor is still vulnerable and the request could be intercepted, altered or redirected. But let’s assume that this first request is safe.

Setting the Http Strict-Transport-Security header (HSTS)

After this initial request, the website should 301-redirect the request immediately to the HTTPS equivalent, together with the following header: Strict-Transport-Security: max-age=31536000. The browser will cache this value and now knows that for every request, even if it was initiated over HTTP, the secure equivalent has to be used.

To show what exactly happens, I included the images below:

The inital request

The initial request is over http and responds with a 301 -> https://insurco

The page gets redirected

The page that is served securely, has the strict-transport-security header added to the response

The next time the user tries to visit the webpage over http

An internal redirect (status code 307) to the secure page, no network traffic is involved in this step

Cool, I want this, how do I do that?

First, a secure connection is needed. After that, you can configure your proxy to inject this header on every request, or configure Microsoft’s IIS URL Rewrite module to do it. My advice would be to configure this rule on the proxy, but if you don’t have access to the proxy or it takes too long, it’s also possible to configure IIS using the following rule. It forces the website to be visited over HTTPS and sends the Strict-Transport-Security header along with it.

<rewrite>
		<rules>
			<rule name="Http Redirect to HTTPS" enabled="true" stopProcessing="true">
				<match url="(.*)" ignoreCase="true" />
				<conditions logicalGrouping="MatchAny">
				  <add input="{HTTPS}" pattern="off" ignoreCase="true" />
				</conditions>						
				<action type="Redirect" url="https://{HTTP_HOST}/{R:1}" appendQueryString="true" redirectType="Permanent" />
			</rule>
		</rules>
		<outboundRules>
			<rule name="Add Strict-Transport-Security when HTTPS" enabled="true">
				<match serverVariable="RESPONSE_Strict_Transport_Security" pattern=".*" />
				<conditions>
					<add input="{HTTPS}" pattern="on" ignoreCase="true" />
				</conditions>
				<action type="Rewrite" value="max-age=31536000" />
			</rule>
		</outboundRules>
	</rewrite>


Securing the Initial request

It is possible, for several browsers, to secure the initial request as well. Most major browsers work with an HSTS preload list. A domain can be submitted to this list on https://hstspreload.appspot.com/, which will put it into the preload list of all browsers.

Be cautious! Things may break!

The preload list requires a very important setting: includeSubDomains. This setting forces every subdomain to be served over HTTPS as well. If any subdomains exist that are only accessible over HTTP, they will break.

Summary

Serving sites over HTTP is not safe. Although you might only serve content, attackers may use unsafe connections to inject malicious forms, redirect requests, phish usernames and passwords. To force browsers (and thus, their users) to connect over HTTPS, the Strict-Transport-Security header should be used.


Sitecore Security #3: Prevent XSS using Content Security Policy


Client-side code is being used more and more on modern websites. All kinds of resources, for example JavaScript, CSS, fonts and complete pages, can be loaded dynamically into websites, from the current website or from an external domain. Attackers might be able to pull off an XSS attack by loading different kinds of data or scripts into your site, which will run in your clients’ browsers. These injections might happen on your own site, or in external services that you make use of (for example Disqus, or ads you are displaying). Applying a content security policy is one of the defenses against this kind of attack. This blogpost shows scenarios that might happen (some of them tailored to Sitecore) and how the content security policy can help prevent a successful attack. As regular solutions provided on the internet do not offer the flexibility that a Sitecore solution (and CMSes in general) needs, I decided to create a content-manageable module and added it to my SitecoreSecurity module.

This is not a write-up on the complete CSP specification, there are other great sources for that on the web, I included them at the end of the article.

The module will be available on the marketplace when it has passed quality control.
Sourcecode is available on: https://github.com/BasLijten/SitecoreSecurity

The danger of XSS attacks – some scenarios

XSS attacks by themselves can be quite harmful, but they often lead to even worse attacks. Imagine what could possibly happen when someone is able to inject custom JavaScript into your website. That attacker is able to run code on the client’s machine, which could lead to several situations. The least harmful is showing an alert:

From a business perspective, this is a situation that you don’t want to appear on your site. The complete site could even be defaced.

A more harmful situation is that an attacker might get control over the user’s browser, without the user even noticing it, by loading an external JavaScript. With the current HTML5 capability of using the camera, this could lead to a situation where the attacker can take videos or pictures remotely:

I even showed this off during my SUGCON presentation – Sitecore might be secure, but your site isn’t:

Worse things happen, when other parts of your security aren’t in place as well, for example, secure cookies (when working over HTTPS and yes, you should always do that), http-only cookies or session management. I showed off a scenario at the SUGCON where session fixation was possible, due to an XSS vulnerability in combination with bad session management. In this case, Bobby Hack (identity: extranet\robbert) was able to view personal details of me, Bas Lijten:

What did these hacks have in common?

All hacks had one thing in common: the attacker was able to inject JavaScript into the page via a query string parameter that got interpreted, or to load a malicious external JavaScript file that got executed.

A very simple, yet effective way to prevent reflected XSS attacks is to apply correct input and output validation. Never trust your external actors! But when that line of defense is broken, or some external service that you use has a broken security layer, other defensive mechanisms have to be in place. This is also known as “defense in depth”: never rely on a single line of defense.

How can the content security policy help in here?

With the CSP, policies can be defined about which kinds of resources are allowed, how they are allowed and which sources are trusted. The resources are split among different categories, for example default (your fallback/main policy), script, images, styling, et cetera. Per resource, different options can be applied:

None – prevents loading resources from any source
All – allows all sources
Self – allows loading resources from the same origin
Data – allows loading resources via the data: scheme (e.g. Base64-encoded images)
Unsafe-Inline – allows the use of inline elements: style attributes, onclick handlers, script tags, et cetera
Unsafe-Eval – allows unsafe dynamic code evaluation, such as the JavaScript eval() function
Hosts – the external hosts that content may be loaded from

Not every parameter is available in each resource group; it depends on the type of resource. Due to the categorization per resource type, flexible policies can be created: allow JavaScript to be loaded from *.googleapis.com while disabling inline script and the unsafe eval function, while CSS may only be loaded from the same origin with unsafe-inline allowed. A policy like that looks like:

Content-Security-Policy: default-src 'none' ; script-src 'self' 
https://*.googleapis.com; style-src 'self' 'unsafe-inline' ;

Please note that all major browsers support the Content-Security-Policy header, some in the form of the standard header, some needing the X-Content-Security-Policy header. Some browsers do not support all resource types either.

Creating a CSP

Creating a CSP by hand can be very time-consuming. You’d need to know the exact specifications (a script policy contains different parameters than a CSS policy, while the referrer section looks completely different). Is the option ‘none’ with or without quotes, is the option ‘self’ with or without quotes? How do you specify the allowed hostnames? Scott Helme, security specialist, created the website report-uri.io, on which he hosts security tools; one of them creates those CSPs. Go play around with it and see how stuff works ;).

For Sitecore, I created a module to do this, as multiple policies may need to be served and maintained per site. These CSPs are manageable from content without too much effort; more on that later.

Testing a CSP

When configuring a CSP, the policy can be created and applied to the page. The next image shows the out-of-the-box Sitecore landing page, with the following CSP applied.

Content-Security-Policy:default-src 'none' ;script-src 'none' ;style-src 'none' ;
img-src 'none' ;font-src 'none' ;connect-src 'none' ;media-src 'none' ;object-src 'none' ;
child-src 'none' ;frame-ancestors 'none' ;form-action 'self' ;manifest-src 'none' ;

One of the lessons learned: misconfiguring the CSP can seriously damage your site! The good thing is that the console shows exactly what is going wrong:

While this information can be used to test and modify existing policies, it’s not convenient to do this with a broken site. Luckily, there is a solution: the Content-Security-Policy-Report-Only header. When the same policy is used with this header and the original CSP is removed, the site will still work (unprotected!), while the new policy can be tested:

A mixture of a CSP and a CSP-Report-Only header will work together: using this mixture, your site remains protected, while the modifications can be tested. Of course, different pages with different policies will behave differently in different browsers (and how many times can the word ‘different’ be used in one sentence? I need to differentiate more ;)). See Akshay’s blogpost on the browser differences for more info! This can be pretty hard to test and monitor in different circumstances. And of course, there is a solution to test and monitor these policies.

Reporting CSP violations

Monitoring violations (which can be malicious, or might be due to an error) is not too hard: the CSP specification contains a report-uri field which can be used to send violations to. The website report-uri.io, which I mentioned earlier, can also be used as an endpoint to collect the CSP violations. I configured my site to report to this service:
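Reporting is enabled by appending the report-uri directive to the policy; the account path below is a hypothetical example:

```
Content-Security-Policy-Report-Only: default-src 'self';
    report-uri https://report-uri.io/report/example-account
```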

In this specific example, some enforced policies are shown (a filter on report-only didn’t work, as I didn’t have any report-only errors at the time). The first policy shown contains a violation of the script-src resource policy: this violation happened because of the use of the unsafe eval function. Another example is the fourth one, where an image violation has taken place: I configured my policy to only allow images from my own domain, while the site tried to load images from an external domain.

Seeing a CSP in action!

In my example I referred to an XSS hack that I showed during SUGCON. This hack involved loading a malicious external JavaScript that was injected into a trusted subsystem. This JavaScript was loaded on my page and caused a picture of me to be taken remotely using my webcam:

When the CSP is applied, this leads to the following behavior: the requests to the domain that is not whitelisted get blocked by the CSP. This mitigation prevented a malicious user from taking a picture of me.

Introducing Convenient Content Security Policy for Sitecore

While most blogposts on the internet teach us how to configure IIS to use CSP (by adding a header in the web.config), this doesn’t work too well for Sitecore. Each page might need another CSP (although I wouldn’t recommend that one) and how would a multi-site setup be configured? Would every CSP have to be created on an external site, copied over, verified, published, et cetera? While this would work the first few times, it wouldn’t work over time.

I created a template which contains all of the CSP 1.0 and 2.0 options (yes my friends, next surprise, there are multiple versions ;). Using this template, separate policies can be created:

Another template, the Content Security Policy link, which contains the CSP Link and the CSP Report Only link, can be used as one of the base templates of a page ( I would prefer to have one single base content page template to which all the generic templates can be assigned). This results in the addition of two extra fields on your pages:

Using this mechanism, CSP’s can be reused easily, be put on standard values, be updated easily, et cetera! This results in the following extra response headers in your request.

This module adds the X-Frame-Options header as well, based on the frame-ancestors setting of your CSP. It should lead to the same result, so there is no use in configuring this one separately and possibly differently; that would lead to strange browser behavior. This is especially relevant since the introduction of the X-Frame-Options header in Sitecore 8.1 update 3: out of the box the X-Frame-Options: SAMEORIGIN header would be sent in the response, which could conflict with your CSP policy.
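In other words, the module emits a consistent pair of headers, the frame-ancestors directive and the legacy X-Frame-Options header expressing the same restriction (a sketch):

```http
Content-Security-Policy: frame-ancestors 'self'
X-Frame-Options: SAMEORIGIN
```

Older browsers that don’t understand frame-ancestors fall back to X-Frame-Options, which is why the two values should never contradict each other.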

Summary

The Content Security Policy header is an excellent mechanism to defend against XSS and other code injection attacks. While the CSP itself can be quite big, confusing and even break your site, there are several tools and patterns to aid you in the journey to a safe website. With the introduction of the Sitecore Security module, the CSP’s can even be managed from within your content on a per-site or even per-page basis!


Sitecore Security #4: Serve your site securely over https with Let’s Encrypt


In a previous blogpost about the Http Strict Transport Security I explained how to force connections to make use of https to encrypt connections. A lot of people think it’s expensive, hard to implement and slow. This blogpost shows off how you can get a free, secure certificate, get your Sitecore site up-and-running in no more than 5 minutes, just by using the Let’s Encrypt service. Source-code can be found here on Github.

Why https and what’s needed for that

Basically, the only thing that you need to serve your site over https is a certificate, which has to be renewed once in a while. Of course, your site has to be built in such a way that it can be served fully over https (for example, no mixed content), or you won’t make the most out of it.

The reasons to switch over to HTTPS are numerous:

  • it’s safe – with a man in the middle attack, your data can’t be sniffed, manipulated or redirected
  • it’s faster – when used over http/2. Windows Server 2012 doesn’t support this, but Windows Server 2016 does. In my next blogpost I’ll show off some results
  • It’s better for SEO – when a page is served over https, Google increases the page-rank for that page

But why are there so many sites not using https?

Costs – On a lot of different places, those certificates are offered for $10 to $200 per year.

Extra work – Those certificates are sent over mail (boohoo), have to be downloaded manually, or some other manual interaction is required to obtain and install them. I promise you: this *will* be forgotten. One, two, ten certificates will be manageable, but when you’re administering more and more certificates, things will be forgotten. Eventually, this will cause your site to break, as your certificates are out-of-date.

Hard – Where should I store them? How can I import them to IIS? Who should do that?

Meet Let’s Encrypt

Let’s Encrypt is an online service with the mission to serve the complete web over https. To achieve this, using https has to be as easy as possible; that’s why they use the following principles:

  • Free: Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.
  • Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.
  • Secure: Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.
  • Transparent: All certificates issued or revoked will be publicly recorded and available for anyone to inspect.
  • Open: The automatic issuance and renewal protocol will be published as an open standard that others can adopt.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization.

How does this work?

There are a lot of tools to get Let’s Encrypt to work with windows and IIS. Troy Hunt wrote a great blogpost on how to set Let’s Encrypt up with Azure web apps. The short story on how Let’s encrypt works is:

  • Validate the domain – if the website is accessible via the domain that you want to create certificates for, it must be your domain.
  • Let’s Encrypt creates a token – that needs to be accessible via your website on that same domain
  • Place that token on your website – Let’s Encrypt must validate that token. If it’s accessible, the certificate can be created
  • Store and bind the certificate – Store the certificate in the certificate store, bind it to your domain in IIS and you’re ready to go!

One of the tools available to get Let’s Encrypt to work with IIS is Let’s Encrypt Win Simple. It offers an executable which lets you select a host to create a certificate for, installs it to IIS and automates the process outlined above.

After the selection of the correct host, the tool tries to validate your domain and tokens:

Because of the protocol, the token is expected to be available via <yourdomain>/.well-known/acme-challenge/xxxxxxxxxxxxxxxxxxxxxx.

Get it to work with Sitecore

Out of the box, this doesn’t work with Sitecore, as Sitecore (well, MVC) prevents the token file from being served.

Using a simple web.config in this directory fixes the issue. In my Security module on GitHub, I included this web.config, which automatically gets deployed. It works multi-site, so it’s accessible via any site that you configure; no extra work is needed!
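For reference, a minimal web.config for the /.well-known/acme-challenge folder could look like the sketch below (the one shipped with the module may differ slightly): it serves the extensionless token files as static plain-text content, bypassing the MVC pipeline.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <staticContent>
      <!-- ACME tokens are extensionless files; serve them as plain text -->
      <mimeMap fileExtension="." mimeType="text/plain" />
    </staticContent>
    <handlers>
      <!-- let the static file handler serve everything in this folder -->
      <clear />
      <add name="StaticFile" path="*" verb="GET" modules="StaticFileModule" resourceType="Either" requireAccess="Read" />
    </handlers>
  </system.webServer>
</configuration>
```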

After validation by the Let’s Encrypt tooling and positive feedback, the certificate is automatically imported, put into the right store and bound to your IIS site. This results in a website that’s being served over https:

Summary

Securing your website has become much more accessible since the introduction of Let’s Encrypt and their automation tools! Using my Sitecore Security module on GitHub, Let’s Encrypt is as accessible for Sitecore as for any other .NET application.


Getting started with sitecore: The 101-guide to the community


A few years back, back in the Sitecore 7.x days, I started to work with Sitecore. I originated from the SharePoint community (take note of the capital “P”!), where there are SO many active bloggers. I think this was caused by a bit of history. “Back in the SharePoint 2007 days” all the SharePoint info we got came from Google, or from Reflector, as the documentation wasn’t always “that well written”. It appeared that there were a few persons actively blogging about their findings, and through the years the amount of people actively blogging, writing cool code or helping each other out exploded, but you had (and still have) to find your way to all the information.

I see the same pattern happening in Sitecore. A lot of great functionality, a great product, but not every feature is always documented. As everyone tries to get the most out of the platform, people are seeking the boundaries of the product and finding out how stuff works. A lot of people are looking for help, a lot of people are blogging, but it’s not always that evident to find the sources that you need. “Where is the community?” you might ask. And that’s exactly why I decided to write this blogpost.

A first free lesson: “SiteCore” is written as “Sitecore”. Please take care of this, as most Sitecore community members are a bit sensitive to it ;).

Sitecore and the Community whereabouts

There are a few different places where the community gathers around, so if you are looking for information, try these places!

Your own blog!

Without people like you, a community doesn’t exist! Did you do something cool, noteworthy or do you just want to share your experiences? Write it down! Other people invest (valuable) time in their blogs to share their information with you, so why wouldn’t you do the same? To me, it started out to write things down that I shouldn’t forget. Why not share those notes with the community? Of course it may be hard to start, so check out the places below to get started.

Twitter

Engage actively in the Sitecore community by following this hashtag and account and post to twitter by using the #Sitecore hashtag

  • Follow the #sitecore hashtag in tweetdeck.twitter.com and you’ll read the latest news.
  • Follow @Sitecore – no explanation needed 😉

Sitecore Slack

https://Sitecorechat.slack.com – a very active community, although the Slack isn’t publicly available. Send a DM on twitter or LinkedIn to one of the following members to get access:

  1. Akshay Sura
  2. Michael Ian Reynolds
  3. Kamruz Jaman
  4. Adam Najmanowicz
  5. Nikola Gotsev
  6. Mike Edwards
  7. Johann Baziret
  8. Anis Chohan
  9. Richard Seal
  10. Dan Solovay
  11. Nathanael Mann

More info on Sitecore slack? Visit this page to find out how to join this community!

Community.Sitecore.net

https://Community.Sitecore.net – A forum hosted by Sitecore where community members can ask their questions regarding Sitecore

Sitecore on stackexchange?

Do you really want to start with your first contribution to the Sitecore community? Sign up and commit yourself to the Sitecore community. We’ll get our own http://sitecore.stackexchange.com!

Join Conferences and usergroups

There are several usergroups throughout the world and several yearly Sitecore conferences. A few examples:

Usergroups

Noteworthy places to find information

Sitecore blog feed

http://feeds.sitecore.net/Feed/LatestPosts – a sitecore feed managed by Sitecore with interesting topics

Sitecore technical blogs

http://www.sitecore.net/learn/blogs/technical-blogs.aspx  – A list with a lot of (skilled) bloggers, but not complete, as a lot of new technical bloggers have joined the community over the last years

Blogfeed on Slack

https://sitecorechat.slack.com/messages/blogfeed/   – a broad collection of blogs, updated every minute of the day!

Summary

Getting involved into the community isn’t hard, you just need to find the right places. Find us on twitter, find us on slack and find us on the public forums!

Sitecore 8.2 update 1: Azure deployments, ARM, Web Deploy and the Sitecore Azure Toolkit


With the release of 8.2 Update 1, Sitecore also introduced support for Azure Web Apps. This release is, in my opinion, a major step for Sitecore, as this update makes it very convenient to deploy to Azure using the Azure Marketplace or the provided PowerShell scripts; that’s why I think this release is even bigger than Sitecore 8.2 initial. This deployment pattern is an interesting pattern to use on premise as well, although not all of the services can or should be used on premise. This blogpost describes how the Sitecore Azure Toolkit works. My next blogpost will describe how to use this toolkit to create your own custom web deployment packages, both for Azure and your on premise installation, even with versions older than Sitecore 8.2.

Note: be careful when deploying to your own Azure subscription: when managed incorrectly, a Sitecore deployment on Azure can cause Azure to provision an extensively scaled environment, which generates many resources. Be careful as the cost of this could be high.

Update: modified the blogpost slightly thanks to excellent feedback from Rob Habraken, Steve McGill and Michael West. Thanks guys!

How Sitecore gets deployed to Azure

There are a few different ways to deploy Sitecore to Azure, but two of them make the most sense

  • Deploy from the Azure Marketplace
  • Deploy using PowerShell

Using the Azure Marketplace, a one-click deployment is initiated (I won’t go too deep into how to provision this app, as Praveen Kumar Sreeram already blogged about it):

When deploying this app, just a few parameters are needed which have to be provided during the creation steps. The required parameters are limited to Database settings, resource group name, license file and … the admin password!

 

This app provides a basic Sitecore 8.2 XM installation: no xDB whatsoever. The cool thing about this provisioning method is that it uses the same techniques as I describe in this blogpost: ARM templates and Web Deploy packages.

Deploying using PowerShell using the Sitecore Azure Toolkit

Another mechanism that can be used to deploy to Azure is PowerShell. Sitecore provided the Sitecore Azure Toolkit, which contains a few beautiful gems, but more on that later. This package contains a few folders:

  • Copyrights
  • Resources
  • Tools

The resources folder contains an 8.2.1 folder (which implies that support for other versions will be added in the future) with 3 folders: CargoPayloads, configs and msdeployxmls.

The tools folder contains a few assemblies and a PowerShell module, which provides three different public Cmdlets:

To deploy a new Sitecore environment to Azure, the Start-SitecoreAzureDeployment cmdlet can be used. It requires a few parameters, the most important being the ARM template path and the ARM parameters file:
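An invocation could look roughly like the sketch below (the exact parameter names may differ per toolkit version, so check Get-Help Start-SitecoreAzureDeployment; all names and paths are placeholders):

```powershell
# load the toolkit's PowerShell module
Import-Module .\tools\Sitecore.Cloud.Cmdlets.psm1

Start-SitecoreAzureDeployment `
    -Name "sc-demo" `
    -Location "West Europe" `
    -ArmTemplatePath ".\templates\azuredeploy.json" `
    -ArmParametersPath ".\templates\azuredeploy.parameters.json" `
    -LicenseXmlPath ".\license.xml"
```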

ARM-what?

ARM-templates are Azure Resource Management templates that allow you to deploy an application using a declarative template. With a single template, it’s possible to deploy multiple services with their dependencies. It’s a JSON file in a specific structure which describes dependencies of services, their names, usernames, passwords etcetera. I won’t go too deep into the current implementation and how Sitecore used it, as they did a solid job on providing these. If you want to get more knowledge on ARM, you can get a lot of information on here. The templates themselves can be downloaded here.

This template needs to be fed with parameters to be able to do its job:

 

The template above, which is used for the XP1 provisioning, needs the Sitecore admin password, MongoDB connection strings, SQL Server credentials, the license file and the package URLs of the web deploy packages, in total 4 of them: for Content Management, Content Delivery, Reporting and Processing. Interesting to note is that these packages can be downloaded on dev.sitecore.net. In a classic (on premise) situation we were always forced to create role packages ourselves. How many people created PowerShell, batch files or other smart solutions to generate those role packages for us? Well, Sitecore created tooling for this job and I am pretty positive about it! And the fun part: this tool can be used to create your own packages, which can be used to deploy to Azure and on premise!
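As an illustration, the parameters file that feeds the template could look like this sketch (the parameter names below are illustrative; the authoritative list is defined in the ARM template itself):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "sitecoreAdminPassword": { "value": "<admin password>" },
    "sqlServerLogin": { "value": "sqladmin" },
    "sqlServerPassword": { "value": "<sql password>" },
    "analyticsMongoDbConnectionString": { "value": "mongodb://mongo01:27017/analytics" },
    "cmMsDeployPackageUrl": { "value": "https://mystorage.blob.core.windows.net/wdps/Sitecore_cm.scwdp.zip" },
    "cdMsDeployPackageUrl": { "value": "https://mystorage.blob.core.windows.net/wdps/Sitecore_cd.scwdp.zip" },
    "prcMsDeployPackageUrl": { "value": "https://mystorage.blob.core.windows.net/wdps/Sitecore_prc.scwdp.zip" },
    "repMsDeployPackageUrl": { "value": "https://mystorage.blob.core.windows.net/wdps/Sitecore_rep.scwdp.zip" }
  }
}
```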

How are Web Deploy packages created?

A Web Deploy package 101: this is a package that can be deployed to Azure or IIS using MSDeploy and contains the website files, database files and a lot of other information. In this Web Deploy package, a parameters.xml can be supplied as well, which describes what parameters can be provided during deployment. Using this packaging technique, the same deployment package can be used for different environments, while providing different parameters for every environment. These parameters can be supplied by specifying the values in the msdeploy command, using a setparameters.xml or by using an ARM template.
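For instance, deploying the same package to a specific environment with a setparameters file could look roughly like this (a sketch; the server URL, credentials and file names are placeholders):

```bat
msdeploy.exe -verb:sync ^
  -source:package="Sitecore_cm.scwdp.zip" ^
  -dest:auto,computerName="https://webserver01:8172/msdeploy.axd?site=MySitecoreSite",userName="deploy",password="secret",authType="Basic" ^
  -setParamFile:"cm.test.setparameters.xml"
```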

The Sitecore Azure Toolkit contains a cmdlet to create Sitecore Web Deploy packages. One very important note:

How the Sitecore Azure Toolkit creates web deploy packages is NOT a standard way of doing this. Usually, these packages are created using MSBuild during build, but Sitecore had to create this alternative, as it would become way too complex to create web deploy packages and maintain flexible configurations. Cargo payloads, the common and sku config are terms that Sitecore introduced.

It requires the Sitecore package and some locations in which specific configurations are stored:

  • Cargo Payload
  • Common config
  • SKU Config path
  • MSDeploy XML

The configuration in this location is needed to create the new role-based Web Deploy packages: the standard zip of the webroot (Data/Databases/Website) will be converted to a Web Deploy solution which can be deployed to Azure or your local IIS environment and all the configuration required, per role, as described on docs.sitecore.net and in the installation notes, will automatically be applied. But how?

Basically it works this way:

  • skuConfig path: defines the kind of configuration for which the Web Deploy packages need to be created (in our case, XP1). The contents define which configs to apply.
  • commonConfig: the config that always needs to be applied, in any case. The contents define which configs to apply.
  • Cargo payload folder: the set of actions that need to be applied to add, for example, Application Insights support.
  • Archive and ParameterXML Path: the manifest and parameters.xml that need to be part of the Web Deploy packages.

In a “normal” situation these changes would have to be made manually or using own PowerShell scripts. These actions always consisted of:

  • Enabling/disabling patch files
  • Adding/removing files
  • Changing configuration

And that’s exactly what this PowerShell does for us.

Common and SKU config

Sitecore introduced the common and SKU config to be able to design roles and supply the required archive.xml, the required parameters.xml and the cargo payload actions that need to be applied for that role. Those cargo payloads are specified by the sccpls. sccpl is probably an abbreviation for SiteCoreCargoPayLoad.

Cargo Payload

The cargo payload is something that was introduced by Sitecore, to define transformations for existing web deployment packages and is, thus, not part of the Microsoft msbuild toolkit but is delivered with the Sitecore Azure Toolkit. It defines a set of configuration changes that can be applied to the Sitecore Web Deploy package and are tied to a central theme. Those themes are specified in the common and sku config and may contain one or more of the following themes

When unzipping these files, a few different folders are shown, which are tied to some actions.

  • copy to root contains an action to copy the items to the root of the web deployment package (for example SQL provisioning scripts, I will get back to this later)
  • copy to website contains files which need to be copied to the site root
  • XDTs are XML Document Transform files (configs to transform, for example, the web.config or any other XML.)
  • IoActions: an xml which contains information on patch files: which one needs to be disabled or enabled

These are actions that Sitecore defined which can be used to transform the data in this folder into the Web Deploy package. For example, the Sitecore.Cloud.ApplicationInsights.sccpl contains two actions: Copy To Website and XDTS. Using this technique, the “vanilla” on premise Sitecore installation does not contain Application Insights, while it can be added to the packages that have to be used on Azure. The Copy To Website folder contains all patch-files and the Microsoft.ApplicationInsights binaries, while the XDT contains logic to alter the web.config and to add configuration to the connectionstrings.config.

Below is an example of the IoAction file:

 

Archive and Parameters

The archive and parameter.xml are two files that need to be copied to the root of the web deploy package. This is an example of the parameter.xml of the CM role:

It contains a lot of parameters which may be different on every deployment and/or every role, that’s why they need to be parameterized ;).

 

Taking a look into the Web Deploy packages

Using the Start-SitecoreAzurePackaging cmdlet, the Web Deploy packages will be created. For the XP1 SKU, this will result in 4 different kinds of packages: CD, CM, PRC and REP. The paths to these packages can be supplied to the config file mentioned earlier (and I will get back to that). When taking a closer look INSIDE the packages, the following structure can be seen (in this case the CM version). I will highlight a few observations:

Directory structure

As opposed to the default structure (Data, Database, Website), a totally different structure can be seen. Inside the Content folder the actual website can be found, while /data and /databases seem to be missing.

The /database folder is not available anymore: the database will be provisioned using the dacpac files, while the *.sql scripts are being executed after provisioning. References can be found in the parameters.xml that I shared previously. A nice addition is that the SetSitecoreAdminPassword.sql will be executed after installation: this means that the default password “b” will be overwritten!

Parameters.xml

As described previously: this file describes which parameters are required to deploy this package.

Website and the /data folder

The actual website can be found in the /content folder. When taking a closer look we find that the contents of the old /data folder is located in /app_data. The reason for this is probably that only website roots can be deployed to Azure, thus a /data folder wasn’t an option anymore. This means that your license.xml will be deployed to the /app_data as well.

When a plan comes together

In this (lengthy) blogpost I talked about the ARM templates, their parameters, the required packages that needed to be created and how these parameters worked. Some parameters need to be specified in the ARM parameters template, such as SQL username, password and the Web Deploy packages, which can be used to deploy to the correct instance. The ARM template in itself deploys the different Web Deployment packages to the different Azure Web Apps and takes care of all the required parameters in the parameters.xml. This toolkit can be used to create custom configurations and opens up opportunities to include specific customizations in your baseline!

 

Sitecore on Azure: Create custom web deploy packages using the Sitecore Azure Toolkit


In my previous blogpost I described how the Sitecore Azure Toolkit works and how to create web deploy packages. In this blogpost I’ll explain how to create your own web deploy package configurations, which can be used on Azure and on-premises, even with Sitecore versions older than Sitecore 8.2 update 1. You can apply role-specific configurations, or add custom modules like Coveo, PowerShell Extensions, Unicorn, or even one you package up yourself. Using these techniques will help you establish a repeatable process with standard tooling, leading to decreased deployment time. How cool would it be to have Continuous Delivery and Deployment all the way to production?! I’ll demonstrate with an example, Sitecore PowerShell Extensions, how to work towards a continuous delivery process. As a bonus I’ll package Unicorn as well – future posts will depend on this example, so why not tackle it now.

Special thanks to Rob Habraken, Michael West and Kam Figy, who reviewed this post!

Use Web deploy to add new functionalities

Deploying Web Deploy packages is the way to go to deploy your Sitecore files to Azure and, in my opinion, to your on-premises IIS installation as well. In the future I will shed light on my thoughts about that subject. A cool feature of web deploy is that you can parameterize environment-specific variables for your deployments. In the current WDP’s provided by Sitecore, they have parameterized the Sitecore admin password and the database connection strings for core, web and master: basically every parameter that differs per environment.

When looking from a Helix perspective, the following layers are available.

The current Azure toolkit, the provided ARM templates and the web deploy packages allow you to deploy Sitecore plus its infrastructure; this makes up the Foundation. The Feature and Project layers still need to be deployed. One option is to simply FTP the files, however there are some drawbacks to that. Web deploy is surely the way to go. The process to create web deploy packages is fairly straightforward when using the parameters.xml to parameterize your deployments.

There is one “grey” area: Sitecore modules in this Foundational layer.

I truly believe that when you are deploying your projects, you should clear the complete web application site root, redeploy the foundational modules, the features and on top of that the projects. A big challenge with this approach however includes the following:

How should Sitecore module packages be deployed? The use of web deploy packages does not include installation modules such as Sitecore PowerShell Extensions, Coveo, WFFM.

Difference between initial provisioning and continuous delivery

The initial provisioning of these modules still needs to be performed using the Installation Wizard, as web deploy can’t deploy the content of Sitecore packages (yet) and can’t trigger post deployment steps. But when the initial content has been provisioned and the configuration has taken place, the binaries need to be redeployed over-and-over again, there’s no content involved anymore in this process. In the past I saw solutions where people would redeploy these packages using Sitecore ship, which took, in my opinion, too much time; this overhead wasn’t needed at all. But how can we get those files onto the platform without using Sitecore packages? Naturally, Webdeploy is the answer for that 😉

Create webdeploy packages from Sitecore update packages using the Sitecore Azure Toolkit

The Sitecore Azure Toolkit offers the ConvertTo-SitecoreWebDeployPackage cmdlet, which lets you create web deploy packages from Sitecore module packages:
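An invocation might look like this sketch (the parameter names are assumptions based on the toolkit’s help; the paths are placeholders):

```powershell
Import-Module .\tools\Sitecore.Cloud.Cmdlets.dll

# wraps the module's files folder into a basic web deploy package (scwdp.zip)
ConvertTo-SitecoreWebDeployPackage `
    -Path ".\modules\Sitecore PowerShell Extensions-4.2.zip" `
    -Destination ".\output"
```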

This is a very basic web deploy package which can quickly be deployed to your (Azure) environment on top of your Sitecore installation. No parameters can be specified, no config files can be altered. It’s all or nothing. Of course it’s possible to open this zip yourself, alter the configuration files and insert the parameters.xml, but who wants to do that? It’s a very interesting option to use if none of those changes are needed.

An alternative is to create a cargo payload and add it to your Sitecore webdeploy packages. With a small investment of your time you will create modules that can be installed over-and-over again on different environments, without having to alter any of the patch-files anymore.

A small recap on cargo payloads and configurations

The Sitecore Azure Toolkit provides a PowerShell command to create web deploy packages from existing Sitecore module packages. This command requires configuration files to determine what kind of web deploy packages need to be created. These configuration files point to cargo payloads: small blocks of functionality which can be added to the web deploy packages. Examples include Application Insights binaries, Azure Search functionality settings, or role specific configuration. Of course it’s possible to create own cargo payloads, that can be used in your own configuration.

How to add your own cargo payload

The cargo payload defines a set of actions/transformations that should be applied to the web deploy package. It’s a file with an sccpl extension, “Sitecore Cargo Payload”, and is a standard zip file. The CPL’s that are delivered with the Sitecore Azure Toolkit expose 4 actions:

  • CopyToRoot: copy files to the root of the web deploy package, such as dacpac files and sql scripts. These are typically files that are needed for provisioning
  • CopyToWebsite: copies files to the website root – the files will be deployed to the Sitecore web root using webdeploy
  • IOActions: xml file with actions to enable, disable or delete configuration files.
  • XDTS: folder with Xml Document Transformation files. These transformations can include changes to the web.config or other xml-files like every file in the include folder

The common.config and sku config files define which CPLs should be applied to which role. The common.config is used for all packages, while the sku config defines the role-specific transformations.

Creating your first cargo payload

Let’s start with the Sitecore PowerShell Extensions module as an example. It contains everything: Content, configuration, binaries and parameters that may differ per environment. We also want to only install it on the CM. It’s a prerequisite for SXA and let’s face it, every solution should make use of it especially for continuous deployment.

The starting point for these foundational modules, which come in the form of Sitecore module packages, is the previously mentioned ConvertTo-SitecoreWebDeployPackage command from the Sitecore Azure Toolkit. It basically takes all the files from the files folder and converts them into the most basic web deploy package possible:

This package can easily be converted to a Cargo Payload. Create a new folder with the name “BasLijten.PowerShellExtensions” and create a CopyToWebsite folder in it. Drop all the files from the WDP into this folder. That’s it!
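The resulting cargo payload folder then looks something like this (the file names shown are illustrative, simply whatever the SPE package carried in its files folder):

```
BasLijten.PowerShellExtensions\
└── CopyToWebsite\
    ├── App_Config\
    │   └── Include\
    │       └── Cognifide.PowerShell.config
    ├── bin\
    │   └── Cognifide.PowerShell.dll
    └── sitecore modules\
        └── PowerShell\
```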

Adding transformations and parameters

The next step would (probably) be to transform it to your needs. It’s important to keep those files as vanilla as possible: this will make future module upgrades easy. Next to that, you need to keep track of your own customizations. By using XDTs (Xml Document Transformations) your customizations can be applied to vanilla patchfiles, without having to modify those patchfiles.

The most important question is always: what configuration is generic to my environments and what would be specific to a server/set of servers? In my situation we are hosting multiple Sitecore instances which we want to be configured the same, but have instance specific parameters. To clarify this, I’ll take the security settings for the Sitecore PowerShell Extensions as an example:

I want the remoting service’s enabled and requireSecureConnection attributes to be configurable on each environment (thus specific) and I always want to deny sitecore\admin permissions on each server. This means that I should add the <add Permission=”Deny” IdentityType=”User” Identity=”sitecore\admin” /> line using XDT (so I am not touching the source) and I should use parameterization for the remoting attribute change.

The XDT

Inside the BasLijten.PowerShellExtensions folder, create a folder called “XDTS”. To apply an XDT to a patch file simply recreate the same folder/file structure to that file. In this case, the patch file resides at the location “App_Config\Include\Cognifide.PowerShell.Config”. This means that the same folder structure in the XDTS folder needs to be created. To be able to apply the XDT transformation to “Cognifide.PowerShell.Config”, the file “Cognifide.PowerShell.Config.xdt” has to be created in that folder. The transformation will be applied automatically after creating the WDP. For the sake of completeness, this is the XDT that I used:
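In outline, such a transformation looks like the sketch below. The element path inside Cognifide.PowerShell.config is an assumption (verify it against your SPE version); the Insert transform adds the deny rule without touching the rest of the file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <sitecore>
    <powershell>
      <services>
        <remoting>
          <authorization>
            <!-- always deny the built-in admin account remoting permissions -->
            <add Permission="Deny" IdentityType="User" Identity="sitecore\admin"
                 xdt:Transform="Insert" />
          </authorization>
        </remoting>
      </services>
    </powershell>
  </sitecore>
</configuration>
```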

The parameters

To be able to disable or enable remoting at deployment time, this needs to be a parameterized variable. Please note: you are no longer working in the CPL folder; we are now adding a custom configuration. The addition of this parameter is quite easy:

  • Make a copy of \resources\8.2.1\msdeployxmls\xP0.parameters.xml
  • Rename this copy to MyFirstWDP.parameters.xml

Upon opening this file, you will see a lot of parameter entries. These entries are all definitions for the parameters that should be used:

When looking at this file closely, you’ll see entries which are used for SQL, text files and XML files. Let’s take the Cloud Search Connection String as an example:

A parameter with the name “Cloud Search Connection String” is defined. The parameterEntry describes how that parameter should be applied; in this case, it’s an XML file. The scope defines the location of this XML file and the match is an XPath query to locate the right node.

Below is the snippet that I used to make the remotingEnabled and RequireSecureConnection parameterizable:
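A parameter definition along these lines should do it. The parameter names are my own choices, and the scope and XPath match assume SPE’s default Cognifide.PowerShell.config location and element structure:

```xml
<parameter name="SPE Remoting Enabled" description="Enables the SPE remoting service" defaultValue="false">
  <parameterEntry type="XmlFile" scope="App_Config\\Include\\Cognifide\.PowerShell\.config$"
                  match="//sitecore/powershell/services/remoting/@enabled" />
</parameter>
<parameter name="SPE Remoting Require Secure Connection" description="Requires HTTPS for SPE remoting" defaultValue="true">
  <parameterEntry type="XmlFile" scope="App_Config\\Include\\Cognifide\.PowerShell\.config$"
                  match="//sitecore/powershell/services/remoting/@requireSecureConnection" />
</parameter>
```

At deployment time, web deploy replaces the matched attribute values with whatever is supplied for these parameters.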

When a plan comes together – create your web deploy package

Our goal is to apply the CPLs to the web deploy packages. To make this happen, two steps are needed:

  • Create an sccpl from the CPL folder
  • Create a custom configuration to apply the right CPLs

Create a sccpl

I first fiddled around by manually zipping that directory, renaming it to .sccpl and seeing if it worked. It turned out that I made quite some mistakes. Aside from the manual mistakes, it’s boring work, which is why I created a PowerShell script to automate it. If I had taken a look at the commands that are provided by Sitecore, I wouldn’t have wasted that time. Import-Module Sitecore.Cloud.CmdLets.dll is all you have to do:

This library exposes the New-SCCargoPayload command, which creates the sccpl in a correct way for you.
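A minimal sketch of those two lines (the paths are assumptions for a local toolkit checkout; check Get-Help New-SCCargoPayload for the exact parameter names in your toolkit version):

```powershell
# Load the Sitecore Azure Toolkit cmdlets
Import-Module "C:\SitecoreAzureToolkit\tools\Sitecore.Cloud.Cmdlets.dll"

# Turn the CPL folder into a correctly structured .sccpl in the cargopayloads folder
New-SCCargoPayload -Path "C:\SitecoreAzureToolkit\BasLijten.PowerShellExtensions" `
                   -Destination "C:\SitecoreAzureToolkit\resources\8.2.1\cargopayloads"
```

This avoids all the manual zipping mistakes: the cmdlet produces the archive with the folder structure the packaging process expects.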

Create a custom config

In this example, I will modify the config for the XP0 sku, which contains all the roles, but any config can be used.

  • Navigate to the \resources\8.2.1\configs folder
  • Copy xp0.packaging.config.json
  • Rename the copy to MyFirstWDP.packaging.config.json
  • Open it.

You’ll see the following contents:

  • Change the parametersXML value to MyFirstWDP.parameters.xml
  • Add your new sccpl to the sccpls array and you’re good to go.
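Abbreviated, the result could look something like the sketch below. The property layout is reconstructed from memory of the vanilla xp0.packaging.config.json (so treat the exact property names as assumptions), and the sccpl names are examples:

```json
{
  "scwdps": [
    {
      "role": "XP0",
      "parametersXml": "MyFirstWDP.parameters.xml",
      "archiveXml": "xP0.archive.xml",
      "sccpls": [
        "Sitecore.Cloud.Integration.sccpl",
        "BasLijten.PowerShellExtensions.sccpl"
      ]
    }
  ]
}
```

Every sccpl listed in the array gets applied, in order, when the web deploy package for that role is assembled.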

Create and install the web deployment package

The final step is to execute the Start-SitecoreAzurePackaging with the correct parameters:

Start-SitecoreAzurePackaging -sitecorePath 'C:\shared\Sitecore\repo\Sitecore 8.2 rev. 161115.zip' -destinationFolderPath C:\sitecorewdps\custom -cargoPayloadFolderPath .\resources\8.2.1\cargopayloads -commonConfigPath .\resources\8.2.1\configs\common.packaging.config.json -skuConfigPath .\resources\8.2.1\configs\MyFirstWDP.packaging.config.json -archiveAndParameterXmlPath .\resources\8.2.1\msdeployxmls

This will create a custom, deployable Sitecore web deploy package which can be deployed to Azure. When taking a look inside the WDP, we see that the XDT that adds a rule to block remoting access for the Sitecore admin got applied, and that the default remoting settings (remotingEnabled=false) are still there:

I will not go into the details on ARM, as Rob Habraken will do that. And as I don’t want to re-provision the apps (and thus create new databases) using the ARM templates, I just want to redeploy the contents. This can be done using msdeploy. All we need to do is supply the parameters using a setparameters.xml:

It’s basically specifying the parameter name and the value that you want to provide. In this setparameters.xml I specified the remoting service attributes “enabled” and “requireSecureConnection”, both set to true.
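A setparameters.xml for this case could look like the following sketch; the parameter names must match the ones defined in the package’s parameters.xml (the names here are my own assumptions):

```xml
<parameters>
  <setParameter name="SPE Remoting Enabled" value="true" />
  <setParameter name="SPE Remoting Require Secure Connection" value="true" />
</parameters>
```

Any parameter without a setParameter entry falls back to its defaultValue from the parameters.xml.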

Below is the batch file to deploy web deploy package locally:
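A minimal local deployment could look like this sketch (the package file name and msdeploy install path are assumptions):

```bat
"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^
  -verb:sync ^
  -source:package="C:\sitecorewdps\custom\MyFirstWDP_single.scwdp.zip" ^
  -dest:auto,computerName=localhost ^
  -setParamFile:"C:\sitecorewdps\custom\setparameters.xml"
```

The -setParamFile switch is what feeds the setparameters.xml values into the parameterized package.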

Please note: this will probably not work with your own Azure deployment, as I modified my WDP a bit more, but those changes go too far for this blogpost: I will go deeper into that subject in my next blogpost on provisioning and (continuous) deploying Sitecore.

Conclusion

Sitecore finally created a tool which can be used to configure and assemble role-specific packages with your own or community modules. As it is Sitecore version agnostic and, although it’s an Azure toolkit, can be used to create packages for on-premises environments as well, I am definitely going to use this at our own company. To create a custom cargo payload, very little effort is needed, while adding it to a custom configuration is little work as well. I would advise to start using this toolkit right away to create your custom role-specific web deploy packages and use web deploy to deploy your Sitecore environments, be it on premises or on Azure. Fun fact: I wrote this blogpost without deploying to Azure once, due to a lack of credits 🙂

Use the Sitecore Azure toolkit to deploy your on premises environment


Let’s face it: a lot of customers won’t deploy to Azure immediately, but will have a migration to Azure on their roadmap for the next year. It’s wise to prepare as much as possible to make the transition smooth. This blogpost shows the differences between Azure and classic on-premises deployments and how to create custom web deploy packages for your on-premises environments, to be in line with a possible future move to Azure. It will make your local deployments repeatable while making use of Microsoft standards. Additional advantage: your (initial) deployments may happen faster!

See the video below, where I explain what I did.

The biggest difference between classic deployments and Azure

In classic deployments we recognize the /data, /database and /website folders in the root of IIS. This choice was made in the past to keep the /data folder inaccessible via IIS. In Azure, however, this is not possible: the choice has been made to move this /data folder to /App_Data inside the website root. This will also be the “new” location of your license.xml. Some 3rd party modules, such as Unicorn, also have their data storage in /data; this should be moved to /App_Data as well.
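Moving the datafolder is typically done with a patch file along the lines of the commented DataFolder.config example that ships with Sitecore:

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <!-- Point the dataFolder variable at App_Data inside the website root -->
    <sc.variable name="dataFolder">
      <patch:attribute name="value">/App_Data</patch:attribute>
    </sc.variable>
  </sitecore>
</configuration>
```

Dropped into App_Config\Include, this patch overrides the dataFolder variable without touching the default web.config or Sitecore.config.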

Preparations: Create dacpac files for your databases

In the past, we had to manually attach the databases to SQL Server before Sitecore could be enrolled. With Sitecore 8.2.1, Sitecore also delivers dacpac files to deploy the databases. A new directory, called DACPAC, has been added to the database folder in the Sitecore site root folder. These DACPACs aren’t used while executing a “classic on-premises” deployment, only when deploying to Azure.

A data-tier application (DAC) is a logical database management entity that defines all of the SQL Server objects – like tables, views, and instance objects, including logins – associated with a user’s database. A DAC is a self-contained unit of SQL Server database deployment that enables data-tier developers and database administrators to package SQL Server objects into a portable artifact called a DAC package, also known as a DACPAC.

This technique can be used on premises as well, even with older versions of Sitecore. The only thing that is missing are the DACPAC files, but they can easily be created:

  • Install a fresh Sitecore instance (with the version that you want to deploy using web deploy)
  • Open up visual studio
  • Connect to the database using SQL Server object explorer
  • Right click on the database that needs to be exported as a DACPAC
  • Make sure to select “Extract Schema and Data”
  • Export
  • Repeat this for all four databases
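The steps above can also be scripted with SqlPackage.exe instead of the Visual Studio UI; /p:ExtractAllTableData=true corresponds to “Extract Schema and Data”. The SqlPackage install path and the instance/database names below are assumptions:

```bat
REM Extract schema + data from a local Sitecore core database into a dacpac
"C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe" ^
  /Action:Extract ^
  /SourceServerName:. ^
  /SourceDatabaseName:Sitecore_Core ^
  /TargetFile:Sitecore.Core.dacpac ^
  /p:ExtractAllTableData=true
```

Run it once per database (core, master, web, reporting) to produce all four dacpac files.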

Create a Web deploy package

The following steps have been extensively explained in my previous blogpost, so I will go through them rather quickly.

As this will be a web deploy package used for on-premises deployments, no cloud cargo payloads are needed. This means that new configurations have to be created. Pre-8.2.1 packages lack DACPAC files and this deployment setup needs a datafolder change, so a new cargo payload has to be created for this setup. As the cherry on the apple sauce (a Dutch saying, which means: “the icing on the cake”), the Sitecore 8.2.1 technique to set the admin password during setup will be included as well!

Create the cargo payload

The first step is to create a new cargo payload which contains the DACPAC files, the SQL script to set the admin password, the patchfile for the datafolder and the license.xml. With the ARM templates in Azure this license.xml can be provisioned, but I haven’t found a way to dynamically include this license.xml in web deploy packages yet. A cargo payload with the following files and structure has to be created:

  • CopyToRoot
    • Core.dacpac
    • Web.dacpac
    • Master.dacpac
    • Reporting.dacpac
    • SetSitecoreAdminPassword.sql
  • CopyToWebsite
    • App_Config
      • Include
        • the datafolder patch .config (with \App_Data as datafolder)
    • App_Data
      • license.xml

Run the New-SCCargoPayload command to create the sccpl and you’re good to go.

Create the custom config file for your web deploy package

I created two new configuration files: one common.config and one singleRole.config, which are used to create the web deploy package. No Sitecore.Cloud references should be left in here; only the BasLijten.SingleRole.sccpl should be included. In these files it’s possible to make different configurations for different roles!

Alter the archive.xml and parameters.xml

The archive.xml and parameters.xml will be copied into the web deploy package by the tooling. I copied an archive.xml and removed the entries of some .sql files that aren’t available. The parameters.xml defines what parameters are required to provision the application. An example is the following snippet:

Three parameter entries are defined. Each parameter entry tells where to look (ProviderPath, XmlFile, TextFile), what to look for and what to do.

  • Sitecore Admin New Password – replace the PlaceHolderForPassword string in SetSitecoreAdminPassword.sql
  • Core Admin Connection String – make sure that the Sitecore.Core.dacpac gets provisioned to this database and execute SetSitecoreAdminPassword.sql on this database. As the PlaceHolderForPassword was replaced by the provided password, the new password will be stored in the database. Bye bye “b” :D.
  • Core Connection String – replace the core connection string with the provided value in the XML file “ConnectionStrings.config”
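Reconstructed, the three entries could look like the sketch below. The scopes and matches are approximations of the conventions used in Sitecore’s own parameters.xml files, not verbatim copies:

```xml
<parameter name="Sitecore Admin New Password" description="New password for the sitecore admin account">
  <parameterEntry type="TextFile" scope="SetSitecoreAdminPassword\.sql$" match="PlaceHolderForPassword" />
</parameter>
<parameter name="Core Admin Connection String" description="Connection string used to deploy the core database">
  <parameterEntry type="ProviderPath" scope="dbDacFx" match="Sitecore\.Core\.dacpac$" />
  <parameterEntry type="ProviderPath" scope="dbFullSql" match="SetSitecoreAdminPassword\.sql$" />
</parameter>
<parameter name="Core Connection String" description="Core connection string for the web application">
  <parameterEntry type="XmlFile" scope="App_Config\\ConnectionStrings\.config$"
                  match="//connectionStrings/add[@name='core']/@connectionString" />
</parameter>
```

Note how one parameter can drive several parameterEntry elements: the same connection string value both targets the dacpac deployment and the SQL script execution.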

The complete parameters.xml that I used can be found on Github.

Create the actual package

The last step is to run the Start-SitecoreAzurePackaging command (as described in my previous blogpost), which creates the new web deploy package for Sitecore 8.1.3.

Deploy Sitecore using webdeploy

At this point, a Sitecore web deploy package with the 8.1.3 database dacpac files, the Sitecore root, the license.xml and the datafolder patchfile has been created. All you need to do is create a setparameters.xml which provides all (required) parameters and start provisioning:
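For illustration, such a setparameters.xml could look like this (all values are placeholders, and only a subset of the parameters is shown; the master, web and reporting entries follow the same pattern):

```xml
<parameters>
  <setParameter name="Application Path" value="sitecore813.local" />
  <setParameter name="Sitecore Admin New Password" value="SuperSecret1!" />
  <setParameter name="Core Admin Connection String"
                value="Data Source=.;Initial Catalog=Sitecore813_Core;Integrated Security=True" />
  <setParameter name="Core Connection String"
                value="Data Source=.;Initial Catalog=Sitecore813_Core;Integrated Security=True" />
</parameters>
```

The admin connection string is used once, to deploy the dacpac and run the password script; the plain connection string ends up in ConnectionStrings.config.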

The next step is to deploy using the following batchfile:

Conclusion
Creating a custom web deploy package for an older version of Sitecore isn’t hard: it basically just needs the DACPAC files and a patchfile. This makes deploying to your on-premises environments a blast and lets you use the same tooling and techniques for your on-premises installations as well as for Azure.

Zero downtime deployments with Sitecore on Azure


From a business perspective, downtime is not desirable, ever. And if possible, you want to deploy as often as possible, even multiple times a day; maybe even 50 times a day for complex multi-instance environments. And if there would be any downtime, it should be during the night, as most visitors are asleep at that time. From a technical perspective, deployments should occur during business hours: all the developers and administrators are working during these hours, so issues (if any) could be resolved while every engineer is available.

We all know this story, but how many organizations really implement this scenario? This blogpost will show what challenges exist when deploying web applications and how easy it is to implement zero downtime deployments for Sitecore on Azure. The move to Azure not only opens up opportunities for automatic scaling (please make sure to watch the video on that topic as well!), but also offers possibilities for enhanced continuity! This blog post does not show how to integrate with Visual Studio Team Services and Microsoft Release Manager; that will probably be a future topic. Don’t want to read? Watch this video!

As this blogpost is quite technical, here are some links covering the basic fundamentals:

Provisioning vs Deployment

Provisioning usually means that the infrastructure will be set up for a new application, including all dependencies. Deployment is the process of putting a new application, or a new version of an application, onto a pre-provisioned environment. How does this compare to the Azure Resource Manager (ARM) templates that Sitecore provides? The ARM templates provision the Azure infrastructure, and in these ARM templates the web deploy packages are referenced to deploy the first version of your application.

During the deployment lifecycle the infrastructure shouldn’t be reprovisioned: only the new application needs to be deployed to the environment, without redeploying the databases, as that would reset all content to the initial state and all user generated content would be deleted.

This blogpost focuses on the part where provisioning stops: application deployment. The starting point is a provisioned Sitecore XP xM1 infrastructure (Content Delivery and Content Management) in Azure, with modified, user generated content:

Deploying an Application – Potential Issues

Deploying an application from Visual Studio is not too hard, see this blogpost by Johannes Zuilstra on the subject. However, a repeatable and stable deployment within your continuous delivery/deployment process is another story. What if you are deleting files from your solution? Deploying web deploy packages does not retract these files: it either a) removes the complete website root or b) adds new files and alters existing files. So how should cases like these be handled? Especially when working with Unicorn, this is a problem. Another issue is downtime: adding new content will trigger an application pool recycle, which conflicts with the zero downtime requirement. It seems that we have two objectives:

  • A stable deployment
  • Deployment without downtime

Stable deployment – redeploy Sitecore and your application

At Achmea, we always completely redeploy our application: this means cleaning up the website root, re-deploy Sitecore and deploy the application on top of that fresh Sitecore installation. This way we make sure that no “old files” will remain and that we are always working with the latest version of our Sitecore baseline and application.

The same pattern can be used in the cloud:

  • Clean the website root
  • Deploy a Sitecore web deploy package
  • Deploy your application

There is, however, a big disadvantage when using the default Sitecore web deployment packages: they deploy a new database including content (dacpac) and require a LOT of parameters, including the database connection strings, application insights keys and a lot of other parameters, which are usually provided by the ARM templates.

When this package is deployed, all parameters need to be provided, otherwise the deployment will fail. But what is the value of the connection strings? What is the serviceURL of the cloud search parameter? And if these parameters are known and provided, the deployment of this package will recreate the databases; something that shouldn’t happen either. The user generated content (“Sitecore Experience platform – Modified content”) would be lost. The first step towards zero downtime is to solve these issues.

Another way of Configuring Connection Strings in Azure

This is important information on the way towards zero downtime deployments. In the current approach, the connection strings are stored in ConnectionStrings.config, which means that they need to be redeployed with each deployment:

Thanks Rob Habraken for showing off this neat little trick! Navigate to Development tools\console to see what’s actually installed on your app service!

However, Azure also offers an alternative way of storing connection string values! Navigate to the application settings, where app settings and connection strings can be found. When connection strings are entered here, these settings take precedence over the settings specified in ConnectionStrings.config. They are persistent as well: after a removal of all the content in your app service, these connection strings are still valid. This means that a new deployment of your application, without connection strings, will still work, and that those parameters aren’t needed anymore at deployment time.
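Besides the portal, these persistent connection strings can also be set with the Azure CLI; a sketch with placeholder resource names and values:

```shell
# Store the Sitecore connection strings on the Web App itself, so deployments
# no longer need to ship a filled ConnectionStrings.config
az webapp config connection-string set \
  --resource-group my-sitecore-rg \
  --name my-sitecore-cd \
  --connection-string-type SQLAzure \
  --settings \
    core="Server=tcp:myserver.database.windows.net;Database=sc-core;User ID=scuser;Password=<secret>" \
    web="Server=tcp:myserver.database.windows.net;Database=sc-web;User ID=scuser;Password=<secret>"
```

The connection string names (core, web, and so on) must match the names Sitecore expects in ConnectionStrings.config, since Azure overrides them by name.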

It’s also possible to enable “slot setting” – Slots will be explained later in this article.

Deploy the application – not the database

As seen in the parameters.xml, there are a lot of references to the dacpacs and custom SQL scripts, which all need input as well. As those databases already have been installed during the provisioning of the application, all those references can be removed.

Challenge + workaround – license.xml

The snippet above doesn’t show the license.xml – with a reason. Deploying the license.xml using the setparameters.xml seems to be hard: I wasn’t able to include the licenseXml value yet, which means that another solution had to be found. For the time being, a solution with a custom cargo payload containing the license will do, although it’s not a nice solution.

Create a new web deploy package for deployment

Of the 22 initial parameters that were needed, 16 have been removed. The snippet below shows a stripped parameters.xml that can be used after the initial provisioning.

This parameters.xml only needs the necessary input for application deployments.

How those packages can be created is described in my previous blog posts:

Note: The dacpac files can’t be removed from the web deploy packages using the Sitecore Azure Toolkit – they are put in when the first package is created. For now, I didn’t investigate how to remove them easily – I left them in the package.

Deploy to Azure using PowerShell

As mentioned before, I saw some great blogposts on how to deploy your customizations to the cloud using Visual Studio. This is great during development, but I wouldn’t recommend it when working towards continuous delivery. Aside from that, you can’t deploy web deploy packages using Visual Studio. Christof Claessens provided a nice script on how to use web deploy from the command line. This script had to be slightly modified to deploy the web deploy package: it builds the solution first, there was a problem with the contentPath used, and I had to include the setparameters.xml to supply my 6 parameters. Below, my modified script can be found:
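The core of such a script boils down to something like the following sketch. The app name, package path and credentials are placeholders; the real values come from the app service’s publish profile:

```powershell
# Hypothetical values - take the deployment user and password from the publish profile
$appName  = "my-sitecore-cd"
$userName = '$my-sitecore-cd'
$password = '<publish-profile-password>'

& "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
  -verb:sync `
  -source:package="C:\sitecorewdps\deploy\Sitecore_single.scwdp.zip" `
  -dest:auto,computerName="https://$appName.scm.azurewebsites.net/msdeploy.axd?site=$appName",userName=$userName,password=$password,authType='Basic' `
  -setParamFile:"setparameters.xml"
```

The only difference with a local deployment is the dest argument: it targets the Kudu msdeploy endpoint of the app service instead of localhost.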

This will start the web deploy from the command line towards Azure. Let’s see what happens:

  • The web app becomes unresponsive and may even throw some errors.
  • After the deployment, the cache needs to be built up; there is no warmup of the application pool.
  • There are no connection strings specified in ConnectionStrings.config:
  • The connection string values are the default values, as they appear in the default installation files.

  • After warmup, we do see a working website, with the changed content in place:

Objective 1 – a working website – is accomplished, while still preserving the user generated content. No passwords, usernames or connection strings were provided, yet the website still works. A new “vanilla” Sitecore web deploy package was created and deployed. The next step is to deploy a new application. This can be done in various ways:

  • Deploy two packages: the vanilla Sitecore package using msdeploy, and your changes (application) using web deploy from Visual Studio. (I don’t recommend this from a continuous deployment perspective, but I will use it in my screencast and this blogpost.)
  • Deploy two packages: the vanilla Sitecore web deploy package and an application web deploy package
  • Deploy one web deploy package: Use the Sitecore Azure toolkit to create a cargo payload for your application(s) and build a new web deploy package, including the Sitecore root.

Note: I am not sure which scenario I like the most, scenario 2 or 3. While developing/testing, scenario 2 might be the best, as creating a full new package using the Sitecore toolkit takes quite some time; from a continuous delivery perspective, scenario 3 has a lot of advantages.

To “prove” that new applications can be deployed, a new solution has been created. Three changes have been made:

  • The default.css has slightly been modified: the contentTitle class got a red background.
  • The “sample item” template got an extra field, “note” (via Unicorn; the reason why this tool was used over TDS will become clear later in this blogpost).
  • The “sample rendering XSLT” has been modified to show the “note” content

These changes result in a few files that can be deployed using web deploy: the CSS, the XSLT and the serialized data in yml format. This results in the following, astonishingly beautiful look & feel:

Note: Unicorn was already added in my Sitecore baseline (as a cargo payload, similar to the way I added Sitecore Powershell Extensions to my baseline in this post), so only the Unicorn configuration for my application had to be added to my project.

Deployment without downtime – meet the deployment slots!

The next objective is to deploy a new application without downtime and errors. In Azure, it’s possible to use “deployment slots“. These slots are separate web applications with their own configuration, their own website root and their own publishing profiles. They can be used to test your changes, before deploying the application to the production stage. As seen in the image below, a “staging” slot has already been created:

These slots provide the possibility of cloning the configuration sources (the app settings and connection strings). Remember the “enable/disable” slot settings? When enabled, settings are specific to the slot they are configured in; when not, the settings will be copied along with the application. In this example the settings are not slot specific, which means that those settings are cloned: the same core, master and web databases will be used.

Note: When having slot specific settings, your application will get a cold start!

After testing the staging environment and the official acceptance by business owners, a “swap” can be executed. Both a direct swap and a swap with preview are possible:

“Swap” leads to a direct swap of web applications: your staging environment becomes production and vice versa.

“Swap with preview” does basically the same, but leaves a moment to cancel the swap before the slots get swapped (this is demonstrated in the video as well). It does exactly the same as a direct swap, but doesn’t switch load balancers yet.
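For reference, both variants can be scripted with the Azure CLI (resource names are placeholders): --action preview starts a swap with preview, --action swap completes it, and --action reset cancels it:

```shell
# Apply the target slot's settings to staging, but keep traffic unswitched
az webapp deployment slot swap --resource-group my-sitecore-rg --name my-sitecore-cd \
  --slot staging --action preview

# Complete the swap (or run with --action reset to cancel the preview)
az webapp deployment slot swap --resource-group my-sitecore-rg --name my-sitecore-cd \
  --slot staging --action swap
```

This is also the hook for automating the swap from a release pipeline instead of clicking through the portal.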

Note: these actions can be triggered from Microsoft Release Manager as well, for a fully automated deployment!!

Note2: I learned from Rob Habraken that staging slots can freely be used in the Sitecore perpetual server based license model.

When the project has been deployed to the staging environment, the environment can be validated, tested and warmed up. After a swap, the new, provisioned application is available to everyone!

Objective 2: Zero downtime deployments – accomplished!

Convenient rollback with Unicorn

Earlier in this article was stated that I wanted to use Unicorn. When rolling back deployments, it’s important to roll back the filesystem and the deployed applicative content. With Sitecore, this is only possible by

  • Restoring SQL backups for the core, master and web database – this causes downtime
  • Deploying a new Sitecore package – simply restoring the old package will probably not cut it

With Unicorn, the “old” situation is still available in the deployment slot (after swapping). By simply syncing that content and swapping the application back to production, the roll back is executed – no deployment of old web packages is involved in this process.

Conclusion

It’s not too hard to set up a small environment to deploy without downtime and to have a very convenient rollback scenario, also without downtime. However, there are some drawbacks, which can be overcome, but some work is needed:

  • Sitecore offers ARM templates to provision Sitecore with an initial deployment
  • The web deploy packages that are offered by Sitecore cannot be used for redeployment, as they recreate the Sitecore databases. They also require parameters that should only be provided once.
    • This leads to separate “initial deployment” (provisioning) web deploy packages and “deployment” web deploy packages, which means extra work when altering baselines: custom archive.xml’s, parameters.xml’s and configuration files.

Apart from the drawbacks, I am very enthusiastic about the possibilities for Sitecore on Azure: convenient ALM, scaling, zero downtime deployments. This is definitely going to help towards a continuous deployment scenario. Happy Azure-ing (is that a word? ;))

PS: Special thanks to Rob Habraken, Christof Claessens, Pete Navarra, Sean Holmesby and George Chang for their reviews

Revealing Robbie at the Sitecore SUGCON 2017 – Windows IoT, Raspberry PI, Cognitive Services


Today, Rob Habraken and I launched our newly and secretly built project at SUGCON Europe 2017. Something that has never been done before: a real robot that moves, interacts, communicates and executes tasks, fully driven by Sitecore XP, using additional techniques like Artificial Intelligence, Machine Learning, Natural Language Processing, Face Recognition and Emotion Detection.

It is a robot whose behaviour is fully controlled by Sitecore xDB!
Why? To show the power of Sitecore’s marketing suite. To prove that Sitecore’s omni-channel capabilities go beyond existing channels. To discover the boundaries of this technology and experiment with the new Microsoft Cognitive Services. And just because it’s awesome!

Besides the cool appearance of the robot, Robbie, there’s something in the software Rob and Bas built that is groundbreaking and a useful innovation: Robbie can detect and recognize multiple persons within its field of vision and run multiple xDB sessions simultaneously, being able to interact with multiple individuals at the same time, while personalizing behaviour towards each of them individually. From one device.
In the upcoming months, Rob and Bas will release multiple blog posts going deeper into the technology behind this project. They will demo Robbie on other events as well, and eventually release all code that drives this awesome project!
Want to see more? Keep an eye on the #sugcon Twitter channel or our websites, as the recordings of this session will be shared soon!

 


Sitecore Profiling and tracing without an administrator or developer role


When working on Sitecore projects, situations will pop up where you want to pinpoint performance issues. The out of the box capabilities are great, but they require a developer role or an administrator account. While this might work in a lot of situations, there are situations where this just isn’t possible. For example, when having one or more (virtual) extranet users which don’t have the Sitecore developer role and whose identities are needed to make backend calls. Performance issues might appear in those backend calls, but it may be hard to pinpoint where those performance sinks are located. That’s why I created a solution where the out of the box profiling and tracing options can be used for any user.

The solution is available on github

The original Sitecore solution contains some hardcoded logic which checks for the admin account or a developer role. This is a role that we won’t assign to any extranet user, ever, not even temporarily. The code below is causing this inconvenience, and I would recommend Sitecore to make this configurable in the near future 😉

That’s why we had to come up with another solution. In this specific solution, the default logic can be overridden by enabling a patch file. After the patch file has been enabled, the default querystring parameters for enabling and disabling profiling and tracing can be used. This way the out of the box functionality continues to work, while we are able to get this kind of insight for any user and role.
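As a sketch of the mechanic only: a Sitecore patch file can swap the processor that performs the hardcoded check for a custom implementation via patch:instead. Both type names below are purely illustrative placeholders, not the actual processor involved; see the GitHub repository for the real patch:

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <renderLayout>
        <!-- Illustrative only: replace the default security check with a custom
             processor that allows profiling/tracing for any authenticated user -->
        <processor patch:instead="processor[@type='Sitecore.Pipelines.RenderLayout.SecurityCheck, Sitecore.Kernel']"
                   type="BasLijten.Diagnostics.SecurityCheck, BasLijten.Diagnostics" />
      </renderLayout>
    </pipelines>
  </sitecore>
</configuration>
```

Disabling the behavior is then just a matter of renaming the patch file to .disabled, which is why this approach keeps the vanilla logic intact.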

I would recommend to disable this functionality whenever possible and only enable it when it is needed. A new requirement would be to make this configurable through settings in Sitecore and get rid of the patch files. This is on my (very long) to-do list 😉.

A new Sitecore version has shipped, save your ass(emblies)


With the release of Sitecore 9.0 at the Sitecore Symposium, a lot has changed. We saw very interesting improvements on marketing automation, new products like xConnect and Cortex, and the new installation framework. With the new release, new assemblies and versions are shipped, while others are removed. It’s important to update the references in your projects to these new versions, so you won’t overwrite the shipped assemblies with other versions. I made a quick overview of what has changed:

  • Which assemblies have been removed?
  • Which assemblies have been added?
  • Which assemblies had a version change?
  • What are the most important observations?

Using Microsoft Excel and its Power Query, I compared the Sitecore 8.2 update 5 assembly list with the Sitecore 9 version, created a list of all assemblies and checked whether they were added, removed, had a version change or stayed the same.

My most important observations:

  • All Social Connected assemblies got removed. Of course I didn’t read the release notes, but it’s in there: it got removed until further notice.
  • HtmlAgilityPack got upgraded from 1.4.6.0 to 1.4.9.5
  • Newtonsoft.Json finally got updated to version 9.0.1
  • Microsoft Enterprise Library (Entlib) got added

This list can be downloaded here.

 

 

Enable federated authentication and configure Auth0 as an identity provider in Sitecore 9.0


Sitecore 9.0 has shipped, and one of the new features of this release is the addition of a federated authentication module. I wrote a module for Sitecore 8.2 in the past (How to add support for Federated Authentication and claims using OWIN), which only added federated authentication options for visitors. Backend functionality was a lot harder to integrate, but I am glad that Sitecore took up the challenge and solved it for both the front- and backend. It means that I can get rid of the old code and finally use the out of the box solution provided by Sitecore. They created a very pluggable solution which can register basically any kind of authentication module via the OWIN middleware. This blogpost will show how I integrated the identity broker Auth0 with Sitecore. Auth0 is a platform which can act as an identity broker: it offers solutions to connect multiple identity providers via a single connection. Code is available in my GitHub repository:

PS: in this example I use Auth0 as Identity broker for Facebook and Google. It’s of course possible to connect directly to Google and Facebook, I just chose not to do this.

Enable federated authentication

At first sight, getting federated authentication to work in the Sitecore context looks a bit complex, but in the end, it’s just a bit of configuration, a few lines of code and wiring up the OWIN middleware. Martina Welander did a great job documenting the steps to create your own provider, but some small examples always help, right? In the end, you’ll end up with some extra login options, for example this Auth0 variant:

Create an application in Auth0

Two connections have already been created for Facebook and Google, which can be used to authenticate via Auth0. They offer many more options, but for the sake of simplicity, I will stick to these two. If you want to know how to configure these: the Auth0 documentation is outstanding!

To create a new provider for Sitecore, the first step would be to register a new client:

As we are integrating Auth0 with Sitecore, “Regular Web Application” should be chosen as client type.

After the client has been created, navigate to the settings tab. This overview will contain all information that is needed to configure the provider in Sitecore.

Take note of the ClientId, ClientSecret and domain; these will be needed in the Sitecore configuration to connect to the authentication endpoint. However, one setting has to be provided by the developer: the callback url has to be added. This will be <hostname> + “/signin-” + <identityprovidername>, which is https://xp0.sc/signin-auth0 in this example.

In the “Connections” tab I already selected Facebook and Google as external identity providers. Please note that I also enabled another kind of login: Auth0 offers its own user database as well.

That’s all that is needed to set up a new client.

Write the code

Coding is not too much of a hassle and is identical to how you would register middleware in a regular ASP.NET application. The difference is that a Sitecore pipeline is used to register the authentication middleware.

The identity provider pipeline processor must inherit from the “IdentityProvidersProcessor” class and return a unique identity provider name. The overridden “ProcessCore” method contains the code that actually loads the middleware. In a regular ASP.NET application, the OWIN middleware would have been registered in the startup class, but in this case the middleware needs to be registered in the pipeline. The “IdentityProvidersArgs” parameter of ProcessCore exposes the App property, which implements the IAppBuilder interface.

Adding the middleware is business as usual: register it and you’re good to go. One important detail: the claims transformation must be executed explicitly after the user has been authenticated.
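To make this concrete, here is a minimal sketch of such a processor. It assumes Sitecore 9.0’s Sitecore.Owin.Authentication assembly and the Microsoft.Owin.Security.OpenIdConnect middleware; the class name, Auth0 domain, client id and redirect URI are placeholders you would replace with your own values.

```csharp
// Sketch only: assumes Sitecore.Owin.Authentication (Sitecore 9.0) and the
// Microsoft.Owin.Security.OpenIdConnect middleware. Names and values below
// are placeholders, not the exact code from the repository.
using System.Threading.Tasks;
using Microsoft.Owin.Security.OpenIdConnect;
using Owin;
using Sitecore.Owin.Authentication.Configuration;
using Sitecore.Owin.Authentication.Pipelines.IdentityProviders;
using Sitecore.Owin.Authentication.Services;

public class Auth0IdentityProviderProcessor : IdentityProvidersProcessor
{
    public Auth0IdentityProviderProcessor(FederatedAuthenticationConfiguration configuration)
        : base(configuration)
    {
    }

    // Must match the provider name used in the <identityProviders> configuration.
    protected override string IdentityProviderName => "Auth0";

    protected override void ProcessCore(IdentityProvidersArgs args)
    {
        var identityProvider = this.GetIdentityProvider();

        // args.App exposes the IAppBuilder, so middleware registration is
        // identical to a regular ASP.NET OWIN startup class.
        args.App.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
        {
            ClientId = "<your Auth0 client id>",
            Authority = "https://<your-tenant>.auth0.com",
            RedirectUri = "https://xp0.sc/signin-auth0",
            AuthenticationType = this.GetAuthenticationType(),
            Notifications = new OpenIdConnectAuthenticationNotifications
            {
                SecurityTokenValidated = notification =>
                {
                    // The claims transformations must be executed explicitly
                    // after the user has been authenticated.
                    var identity = notification.AuthenticationTicket.Identity;
                    foreach (var transformation in identityProvider.Transformations)
                    {
                        transformation.Transform(identity,
                            new TransformationContext(this.FederatedAuthenticationConfiguration, identityProvider));
                    }

                    return Task.CompletedTask;
                }
            }
        });
    }
}
```

The SecurityTokenValidated notification is a convenient place to run the transformations, because at that point the external identity is fully populated.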

Wiring it all together

The last part is to configure the new identity provider, which consists of a few steps:

  • Register the OWIN authentication provider middleware pipeline
  • Define for which sites the identity provider needs to be registered
  • Define the identity provider itself and configure the claim mappings

But just adding configuration isn’t enough: as this kind of authentication is completely different from the default authentication, federated authentication must be explicitly enabled.

Enable the federated authentication module

As the technique behind this authentication is completely different from the default authentication provider, Sitecore made the AuthenticationManager injectable with an OWIN-based version. To get it to work, enable the \Include\examples\Sitecore.Owin.Authentication.Enabler.config patch file. This patch file injects a different AuthenticationManager, one that supports OWIN authentication modules.


Register the AuthenticationProvider middleware pipeline

This is basically one line of configuration: the pipeline processor which registers the middleware needs to be added here.
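For reference, such a patch file might look like this. The processor type and assembly name are assumptions (they should point to your own processor class); the pipeline name owin.identityProviders matches the one used by Sitecore.Owin.Authentication.

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <owin.identityProviders>
        <!-- Type and assembly are placeholders for your own processor. -->
        <processor type="MyProject.Pipelines.Auth0IdentityProviderProcessor, MyProject"
                   resolve="true" />
      </owin.identityProviders>
    </pipelines>
  </sitecore>
</configuration>
```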

Define for which sites an identity provider needs to be registered

Within the federatedAuthentication node, the authentication providers need to be attached to the sites in which they can be used. This will make the authentication endpoint available to those sites.
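Based on the structure of the default Sitecore.Owin.Authentication.config, such a mapping could look roughly like this; the site names and the “Auth0” provider id are assumptions for this example.

```xml
<sitecore>
  <federatedAuthentication>
    <identityProvidersPerSites>
      <!-- Attaches the Auth0 provider to the shell and website sites. -->
      <mapEntry name="sites using Auth0"
                type="Sitecore.Owin.Authentication.Collections.IdentityProvidersPerSitesMapEntry, Sitecore.Owin.Authentication">
        <sites hint="list">
          <site>shell</site>
          <site>website</site>
        </sites>
        <identityProviders hint="list:AddIdentityProvider">
          <identityProvider ref="federatedAuthentication/identityProviders/identityProvider[@id='Auth0']" />
        </identityProviders>
        <externalUserBuilder type="Sitecore.Owin.Authentication.Services.DefaultExternalUserBuilder, Sitecore.Owin.Authentication"
                             resolve="true">
          <param desc="isPersistentUser">true</param>
        </externalUserBuilder>
      </mapEntry>
    </identityProvidersPerSites>
  </federatedAuthentication>
</sitecore>
```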

Define the identityprovider itself

And last, but not least, the identity provider itself needs to be registered. This section defines the name of the provider, the Sitecore domain it is registered for and how claims should be transformed. In the included example, the role Sitecore\Developer will be added if the idp claim is equal to Auth0. In turn, the role claim “Sitecore\Developer” will be mapped to the Sitecore role “Sitecore\Developer”. Although my advice would be to provide those roles within your identity management solution if possible, this is a very welcome option for the cases when those are not available.
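A sketch of such a provider definition, modeled on Sitecore’s DefaultIdentityProvider and DefaultTransformation types; the caption and claim values are illustrative.

```xml
<identityProviders hint="list:AddIdentityProvider">
  <identityProvider id="Auth0"
                    type="Sitecore.Owin.Authentication.Configuration.DefaultIdentityProvider, Sitecore.Owin.Authentication">
    <param desc="name">$(id)</param>
    <param desc="domainManager" type="Sitecore.Abstractions.BaseDomainManager" resolve="true" />
    <caption>Log in with Auth0</caption>
    <domain>sitecore</domain>
    <transformations hint="list:AddTransformation">
      <!-- If the idp claim equals Auth0, add a Sitecore\Developer role claim. -->
      <transformation name="developer role for Auth0 users"
                      type="Sitecore.Owin.Authentication.Services.DefaultTransformation, Sitecore.Owin.Authentication">
        <sources hint="raw:AddSource">
          <claim name="idp" value="Auth0" />
        </sources>
        <targets hint="raw:AddTarget">
          <claim name="http://schemas.microsoft.com/ws/2008/06/identity/claims/role" value="Sitecore\Developer" />
        </targets>
      </transformation>
    </transformations>
  </identityProvider>
</identityProviders>
```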

Bonus: Map user properties

As Administrator isn’t a real role, but rather a Sitecore user property, this “role” needs to be set in a different way. The PropertyInitializer can be used to achieve this: it reads a claim, and if that claim has the defined value, the property will be set:
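A hedged example of such a mapping; the source claim name and value are assumptions, while IsAdministrator is the Sitecore user profile property being set.

```xml
<propertyInitializer type="Sitecore.Owin.Authentication.Services.PropertyInitializer, Sitecore.Owin.Authentication">
  <maps hint="list">
    <!-- If a claim named "administrator" with value "true" is present,
         set IsAdministrator on the Sitecore user profile. -->
    <map name="set IsAdministrator"
         type="Sitecore.Owin.Authentication.Services.DefaultClaimToPropertyMapper, Sitecore.Owin.Authentication"
         resolve="true">
      <data hint="raw:AddData">
        <source name="administrator" value="true" />
        <target name="IsAdministrator" value="true" />
      </data>
    </map>
  </maps>
</propertyInitializer>
```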

Conclusion

Sitecore did an awesome job integrating federated authentication within Sitecore. All existing OWIN authentication middleware can be used without modification and is easily integrated within Sitecore. A very flexible solution has been created, which makes Sitecore again a little bit more mature.

Solr: Error creating SolrCore when using the Sitecore installation Framework


Today I experienced an error while installing Sitecore 9 using the Sitecore installation framework:

“Install-SitecoreConfiguration : Error CREATEing SolrCore ‘xp0_xdb’: Unable to create core [xp0_xdb] Caused by: null”

Setting the verbose logging option didn’t help, and my attempts to manually reproduce the issue didn’t work out either: either the core was successfully created, or I got an error message that the core already existed.

It turned out that there was something wrong with the timing. In the sitecore-solr.json and xconnect-solr.json files, a few tasks get executed to create/reset the cores:

  • StopSolr – stops the Windows service
  • Prepare cores – copies the basic config set to the directory which hosts the index
  • StartSolr – starts the Windows service
  • CreateCores – tells Solr, using an HTTP request, to actually create the core

In my case, the Windows service was still starting while the HTTP request was executed, which caused the Sitecore Installation Framework to bug out. The strange part: with Solr 6.2.1 this did not happen, while it happened with Solr 6.6.1.

The solution is quite easy (and I expect that Sitecore had the same experience while developing SIF): the StartSolr task has a parameter named “PostDelay”, which is initially set to 8000. I increased this to 20000 (just a lucky number) and all the errors were gone with the wind :D
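For illustration, an abbreviated StartSolr task with the increased delay could look like this; the surrounding properties follow the general SIF task layout and may differ slightly per SIF version.

```json
"StartSolr": {
  "Description": "Starts the Solr service.",
  "Type": "ManageService",
  "Params": {
    "Name": "[parameter('SolrServiceName')]",
    "Status": "Running",
    "PostDelay": 20000
  }
}
```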


Gotchas while installing Sitecore 9 using the Sitecore installation framework


Sitecore released a nice installation framework to install Sitecore and xConnect and to configure Solr. I have used this framework a few times already (on a few machines) and it turned out that I am very proficient in breaking things. Especially Sitecore 9. During these installations I faced some inconvenient issues (and found some tips) which I wanted to share with you. This should help you get up and running even faster!

Download the prerequisites and set up your resource directory

First, download the following files:

Create a directory c:\resourcefiles and extract the contents of the installation package to this folder; there should be a Sitecore scwdp.zip and an xConnect scwdp.zip. Extract the contents of “XP0 Configuration files rev.xxxxxx.zip” to the root of this directory as well. Last but not least: copy your license.xml into it. Make sure to unblock your xml and zip files: right click on each file, select “Properties”, enable Unblock and press OK.

Install the latest version of the Sitecore Installation Framework

Use the following commands to install the latest version of SIF:
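As a sketch, the commands look like the following; the repository name and feed URL follow Sitecore’s documentation at the time of writing, so verify them against the current installation guide.

```powershell
# Register the Sitecore PowerShell gallery and install the framework from it.
Register-PSRepository -Name SitecoreGallery -SourceLocation https://sitecore.myget.org/F/sc-powershell/api/v2
Install-Module SitecoreInstallFramework -Repository SitecoreGallery
```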

When this doesn’t work, there is a chance that you manually installed an older version. Remove it. It might be found in <userdirectory>\WindowsPowerShell\Modules or in “C:\Program Files\WindowsPowerShell\Modules”.

Install Solr, run it as a Windows service and set up https

The first prerequisite is to have Solr running over https. First, install Solr as you normally would; after the installation, you should visit this blog by Kam Figy, as he wrote a nice script to set up https.

The Sitecore Installation Framework requires Solr to run as a Windows service. When heading back from the Sitecore Symposium I tried to set this up, but didn’t get it to work. The trick: make sure to run Solr as a foreground process: solr.cmd -f -p 8983. This blog helped me set it up; they made use of a tool called NSSM.

Enable contained database authentication

This is generally not a best practice, but xConnect requires the ability to log in using a SQL login. When you copy the query from the installation guide, all the commands are placed on a single line, which causes SQL to bug out. Copy the sql query from the source below and you’re good to go!
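For reference, the query in question is a standard sp_configure call; make sure each statement ends up on its own line, as in the snippet below.

```sql
EXEC sp_configure 'contained database authentication', 1;
GO
RECONFIGURE;
GO
```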

Download the configuration files

Don’t. As the Sitecore Installation Framework uses a set of configuration files to deploy an environment, Sitecore provided a set of configuration files. The installation guide tells us to download them from https://dev.sitecore.net, but I spent like 20 minutes searching for them: they weren’t there. It turns out that they are part of the installation package.

Install Sitecore and xConnect (and repeat when this fails)

The next step is to install Sitecore. Sitecore provides a nice installation script, but again: it gives some problems while copy-pasting it. This gist provides the same script, but is easier to copy; save it as c:\resourcefiles\install.ps1. When the Solr task gets executed and gives a strange error, this might be due to the fact that the Windows service hasn’t started yet.

It might happen that the installation doesn’t end successfully (due to some configuration errors). Just restarting the installation will not work, as the framework tries to reinstall the databases. As manually deleting them isn’t fun, I always stop the two web applications (xp0.sc and xp0.xconnect) and run the sql script below:
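A minimal sketch of such a cleanup script is shown below; the database names assume the default xp0 prefix used by the installation script, so adjust the LIKE pattern to match your instance, and double-check the selection before running it.

```sql
-- Take each xp0_* database offline (kicking out open connections) and drop it.
DECLARE @name sysname;
DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases WHERE name LIKE 'xp0[_]%';
OPEN dbs;
FETCH NEXT FROM dbs INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC ('ALTER DATABASE [' + @name + '] SET SINGLE_USER WITH ROLLBACK IMMEDIATE');
    EXEC ('DROP DATABASE [' + @name + ']');
    FETCH NEXT FROM dbs INTO @name;
END
CLOSE dbs;
DEALLOCATE dbs;
```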

It might happen that the Marketing Automation database cannot be deleted. I always delete this one manually; just make sure to tick the box “close existing connections”.

Note to self: do not forget to run the post-installation steps

For some reason I always forget those. As xConnect will NOT work without the post-installation steps, the script below really should be executed. As with the other queries from the guide: when copy-pasting it, the query will bug out. Fire up your SQL Management Studio, create a new query and set the mode to SQLCMD.


Conclusion

Having an automated installation is great and I will definitely use this over SIM, as this approach takes care of a secure installation and sets up Solr and xConnect. However, there are some inconvenient issues, which I just wrote down; I really hope this helps you get up and running as soon as possible!
