Thursday, December 20, 2018

GitHub–Steps to sync a fork with its parent

3 simple steps (this assumes you have set up a remote called upstream that points at the original repo):

  1. git fetch upstream
  2. git checkout master
  3. git merge upstream/master
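
Once the merge completes, you will likely also want to push the result back to your fork on GitHub (assuming your fork's remote has the default name origin):

git push origin master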

And to set up a remote called upstream that points at the original repo:

git remote add upstream {url to the original repo}

For more info see: https://help.github.com/articles/syncing-a-fork/

Tuesday, December 18, 2018

Adding the Knowledge Base Search to the “Social Pane”

On the Case entity, the KB search is embedded as part of the Social Pane:

image

Now on a custom entity, after you have enabled it for Knowledge Management (done via Settings >> Service Management >> Embedded Knowledge Search), you have to add the KB control onto the form:

image

image

But what if you wanted to make it look like the Case entity, where the KB Search is part of the Social Pane? The “KB Search” tab does not show up as part of the properties of the Activities tab.

image

The only way I have found to get it onto the form is to manually edit the FormXML so that the tab shows up, and then tweak it via the form editor.

Here are the steps:

  1. Make sure you have the activities/social pane control on your form (if not, add it – it's called the Timeline control).
  2. Give it a unique name (I called it “Timeline_1234”)
    image
  3. Create a new solution and add the entity to the solution.
  4. In the next screen, make sure you select only the form in question and uncheck “Include entity metadata” and “Add all assets”.
    image
  5. Don't include any required components (on the next screen).
  6. Export the solution as an unmanaged solution.
  7. Unzip the solution and open the “customizations.xml” file.
  8. Search for the control name (in this case it was “Timeline_1234”)
  9. It should look like this:

    <parameters>
         <UClientUniqueName>Timeline_1234</UClientUniqueName>
         <UClientModules>Activities,Notes</UClientModules>
         <UClientDefaultModuleForCreateExperience>Notes</UClientDefaultModuleForCreateExperience>
         <UClientShowFilterPane>true</UClientShowFilterPane>
         <UClientExpandFilterPane>false</UClientExpandFilterPane>
         <UClientOrderBy>descending</UClientOrderBy>
         <UClientRecordPerPage>10</UClientRecordPerPage>
         <UClientSortActivitiesByValue>modifiedon</UClientSortActivitiesByValue>
         <UClientDisplayActivityHeaderUsing>defaultformat</UClientDisplayActivityHeaderUsing>
         <UClientCreateActivityUsing>quickcreateform</UClientCreateActivityUsing>
         <UClientDisplayActivityUsing>defaultfields</UClientDisplayActivityUsing>
         <DefaultTabId>ActivitiesTab</DefaultTabId>
         <OrderByActivityWall>descending</OrderByActivityWall>
         <SortActivityWall>modifiedon</SortActivityWall>
         <EmailConversationView>true</EmailConversationView>
         <ShowArticleTab>false</ShowArticleTab>
         <SelectDefaultLanguage>00000000-0000-0000-0000-000000000000</SelectDefaultLanguage>
    </parameters>

  10. Update ShowArticleTab to true.
  11. Add the following:
    <FilterResults>3</FilterResults>
    <AllowChangingFiltersOnUI>false</AllowChangingFiltersOnUI>
    <ShowLanguageFilter>false</ShowLanguageFilter>
    <ShowDepartmentFilter>false</ShowDepartmentFilter>
    <EnableAutoSuggestions>false</EnableAutoSuggestions>
    <NumberOfResults>4</NumberOfResults>
  12. It should look like this:

    <parameters>
         <UClientUniqueName>Timeline_1234</UClientUniqueName>
         <UClientModules>Activities,Notes</UClientModules>
         <UClientDefaultModuleForCreateExperience>Notes</UClientDefaultModuleForCreateExperience>
         <UClientShowFilterPane>true</UClientShowFilterPane>
         <UClientExpandFilterPane>false</UClientExpandFilterPane>
         <UClientOrderBy>descending</UClientOrderBy>
         <UClientRecordPerPage>10</UClientRecordPerPage>
         <UClientSortActivitiesByValue>modifiedon</UClientSortActivitiesByValue>
         <UClientDisplayActivityHeaderUsing>defaultformat</UClientDisplayActivityHeaderUsing>
         <UClientCreateActivityUsing>quickcreateform</UClientCreateActivityUsing>
         <UClientDisplayActivityUsing>defaultfields</UClientDisplayActivityUsing>
         <DefaultTabId>ActivitiesTab</DefaultTabId>
         <OrderByActivityWall>descending</OrderByActivityWall>
         <SortActivityWall>modifiedon</SortActivityWall>
         <EmailConversationView>true</EmailConversationView>
         <ShowArticleTab>true</ShowArticleTab>
         <SelectDefaultLanguage>00000000-0000-0000-0000-000000000000</SelectDefaultLanguage>
         <FilterResults>3</FilterResults>
         <AllowChangingFiltersOnUI>false</AllowChangingFiltersOnUI>
         <ShowLanguageFilter>false</ShowLanguageFilter>
         <ShowDepartmentFilter>false</ShowDepartmentFilter>
         <EnableAutoSuggestions>false</EnableAutoSuggestions>
         <NumberOfResults>4</NumberOfResults>
    </parameters>

  13. Zip the files back up together.
  14. Import the solution and publish it.
  15. Open the form in the Form Editor and open the properties for the activities control.
    You should be able to set the properties for the KB search now:
    image

Thursday, November 15, 2018

Azure Website–Allowing different file types for download

An Azure website out of the box will not allow you to download static content files such as JSON, MP4, etc. If you need to enable this, add a web.config file in the site/wwwroot folder with the following contents:

<?xml version="1.0"?>
<configuration>
     <system.webServer>
         <staticContent>
             <!--following needed only if extensions were defined at a different level -->
             <!-- <remove  fileExtension=".json" /> -->
             <!-- for a good list of other mimetypes see:
https://www.sitepoint.com/mime-types-complete-list/ -->
             <mimeMap fileExtension="json" mimeType="application/json" />
             <mimeMap fileExtension="mp4" mimeType="video/mp4" />
             <mimeMap fileExtension="ogg" mimeType="audio/ogg" />
             <mimeMap fileExtension="m4a" mimeType="audio/mp4" />
             <mimeMap fileExtension="flv" mimeType="video/x-flv" />
             <mimeMap fileExtension="woff" mimeType="application/font-woff" />
             <mimeMap fileExtension="woff2" mimeType="application/font-woff2" />
             <mimeMap fileExtension="ttf" mimeType="application/font-ttf" />
             <mimeMap fileExtension="csv" mimeType="text/plain" />
      </staticContent>
     </system.webServer>
</configuration>

You can pick and choose the file extensions you need.

Wednesday, November 14, 2018

Dynamics CRM Plugin Registration Tool Log File Location

The Plugin Registration Tool stores its log files at: “C:\Users\{user}\AppData\Roaming\Microsoft\Microsoft Dynamics365© Plug-in Registration Tool\”

You can get to it via this shortcut: %appdata%\Microsoft\Microsoft Dynamics365© Plug-in Registration Tool\

And while we are at it, the path to XrmToolBox's logs is: %appdata%\MscrmTools\XrmToolBox\Logs\

Thursday, November 01, 2018

Currencies and Strong vs Weak terminology

Note to self:

1. When a currency is labelled as being strong, it means that you get more of the other currency for it (eg: if the US$ is stronger than the Indian Rupee, you get more rupees per US$).

2. When a currency is labelled as being weak, it means that you get less of the other currency for it (eg: if the US$ is weaker than the Indian Rupee, you get fewer rupees per US$).

So in this chart, the US$ has become stronger when compared to the INR (because you are getting more INR per $ today (INR 74.4/$) than 52 weeks ago (INR 63.2/$)).

image

When a currency is stronger than its counterpart, different people benefit: the weaker side's exporters are happier as their exports become cheaper, while the stronger side's exporters are less happy as their exports become more expensive:

|  | US Exporter | US Importer | US Currency Holder | Indian Exporter | Indian Importer | Indian Currency Holder |
| --- | --- | --- | --- | --- | --- | --- |
| US$ stronger / INR weaker | Unhappy: exports cost more | Happy: imports cost less | Happy: more INR per $ | Happy: exports cost less | Unhappy: imports cost more | Unhappy: fewer $ per INR |
| US$ weaker / INR stronger | Happy: exports cost less | Unhappy: imports cost more | Unhappy: fewer INR per $ | Unhappy: exports cost more | Happy: imports cost less | Happy: more $ per INR |
| Examples | Caterpillar, Boeing, US software companies | Walmart, World Market | American tourist to India | Indian software companies, Indian manufacturers | International companies that manufacture their goods elsewhere and import to India; Indian resellers | Indian tourist/student travelling to US |

Thursday, October 18, 2018

Cannot convert the literal 'undefined' to the expected type 'Edm.Guid'

If you get this error when working in JavaScript with Dynamics CRM (or any system that uses OData), then you most likely need to set up your object like this:

var request = {
    ObjectId: {guid: "E8C656B7-6AD1-E811-A967-000D3A30D5DB"}
};

Otherwise, the request that gets sent will contain the value 'undefined' and the response will contain an error with the following message:

message=An error occurred while validating input parameters: Microsoft.OData.ODataException: Cannot convert the literal 'undefined' to the expected type 'Edm.Guid'. ---> System.FormatException: Guid should contain 32 digits with 4 dashes (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).

Tuesday, October 16, 2018

Dynamics CRM–Business Rules or Workflows?

When should you use a business rule (BR) and when should you use workflows?

The first thing we should understand is that business rules are just another type of process (just like workflows and business process flows). You can validate this by performing an advanced find and setting the category to Business Rule. The other options for category are:

image

Now when you create a business rule, there are 3 options (2 really): one applies it to forms (all or specific forms) and the other applies it to the entity. When you choose entity, create your BR, and activate it, you will find 2 entries in Processes with the same name. One of these handles the JavaScript part (which runs client side). The other is a synchronous workflow that runs on create and on update of the fields that appear in the conditions of the business rule. You can look at the JavaScript that will run by inspecting the ClientData field on the process; similarly, the XAML field contains the workflow definition.
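
If you want to pull these generated records yourself, here is a minimal FetchXML sketch (this assumes the Business Rule category value is 2, which is what it was in my org; verify in yours):

<fetch version="1.0" output-format="xml-platform" mapping="logical">
   <entity name="workflow">
     <attribute name="name" />
     <attribute name="clientdata" />
     <attribute name="xaml" />
     <filter type="and">
       <condition attribute="category" operator="eq" value="2" />
     </filter>
   </entity>
</fetch>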

Why is this important to know? Here is why:

1. If you have a business rule that needs to be run client side, then Business Rules are your best option. This is especially true if you want to show error messages or perform data validation.

2. But if you are going to have an entity-level business rule that doesn't really contain validation or error messages, then you have a choice: if you use a workflow, you can decide to run it in the background (asynchronously).

Some other things to consider:

Business Rules have a nicer UI. But every business rule that fires at the entity level creates a new synchronous workflow that fires on create and on update of a single field. If you use a workflow, you have more choices: you can create a single workflow that runs on create and does all the things that need to be done on create. This, in my opinion, is more efficient than having multiple workflows all firing one after the other.

Also, look at the limitations of BRs as documented in the Microsoft docs: https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/admins-customizers-dynamics-365/mt826761(v=crm.8)#limitations

Inner workings of Business Rule JavaScript:

Finally, if you are curious about the inner workings of business rules, see the code at: https://github.com/rajrao/CRM-Tools/tree/master/JavaScript/BusinessRules

But basically here are the main things to know:

  • The JavaScript for the business rule is injected into the form script for the form.
  • The function itself is set up to run on change of the fields that appear in the conditions.
  • The function is also run once at form startup.

Wednesday, October 03, 2018

ODataException: An undeclared property 'aaa_xxxxxid' which only has property annotations in the payload but no property value was found in the payload

If you get the following error, then know this: case matters, and the field name is the schema name of the field!

An error occurred while validating input parameters: Microsoft.OData.ODataException: An undeclared property 'aaa_xxxxxid' which only has property annotations in the payload but no property value was found in the payload. In OData, only declared navigation properties and declared named streams can be represented as properties without values.

Please note that the field names are case sensitive and go by the “schema name”. In my case, the schema name was ‘aaa_XxxxxId’. Once that was fixed, the code began working!

var entity = {};
entity["customerid_account@odata.bind"] = "/accounts(21876381-dc67-e811-a954-000d3a378ca2)";
entity["aaa_XxxxxId@odata.bind"] = "/aaa_xxxxx(c9edf6c0-d670-e811-a958-000d3a3780dd)";
entity.caseorigincode = 1;
entity["msdyn_incidenttype@odata.bind"] = "/msdyn_incidenttypes(6760f919-77b0-e811-a95c-000d3a378f36)";

Tuesday, October 02, 2018

Creating and Using Tokens with Azure Service Bus

The following notes are with regards to the “.NET Standard” Microsoft.Azure.ServiceBus library and detail what each parameter is (because I found the documentation lacking in this regard).

Creating a token

Tokens are created using a TokenProvider. There are many options for instantiating a token provider; the one I am showing here utilizes a Shared Access Signature (SAS). But if you are running your code in Azure, a better option would be to use the managed service identity token provider, as one doesn't need to share keys, etc.

To create a token using “CreateSharedAccessSignatureTokenProvider(string keyName, string sharedAccessKey)”, you need a key name and its key. These are retrieved from the “Shared access policies” blade in Azure. Once you have these 2 values, you can create a tokenProvider.

TokenProvider tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(keyName, sasKey);

The CreateSharedAccessSignatureTokenProvider has overloads, in which you can specify the TTL of the token and the scope (namespace vs entity).

To create the token, you use “GetTokenAsync(string appliesTo, TimeSpan timeout)” on the tokenProvider. The “appliesTo” defines either the namespace or the entity and should be specified as follows:

For access to the entire namespace: testservicebusinstance.servicebus.windows.net
For access to an entity in the namespace called myFirstQueue: testservicebusinstance.servicebus.windows.net/myFirstQueue
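
Putting the two steps together, here is a minimal sketch using the names from above (TokenProvider and SecurityToken live in Microsoft.Azure.ServiceBus.Primitives; the 20 minute timeout is an arbitrary choice of mine):

TokenProvider tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(keyName, sasKey);

//scope the token to a single queue; pass just the namespace for namespace-wide access
SecurityToken token = await tokenProvider.GetTokenAsync(
     "testservicebusinstance.servicebus.windows.net/myFirstQueue",
     TimeSpan.FromMinutes(20));

Console.WriteLine(token.TokenValue); //the SAS string you hand out to callers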

Using the token

The GetTokenAsync method returns a token. The token contains a property called TokenValue. You use this TokenValue to work with the Service Bus.

var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(token.TokenValue);
var client = new QueueClient("testservicebusinstance.servicebus.windows.net", "myFirstQueue", tokenProvider);
.
.
.
await client.SendAsync(message);

Controlling access to the service bus:

There are 2 things that are possible:
1. restrict the scope to the entity
2. specify whether the token can manage, send, or listen

The first one I showed you in “creating a token”. The second one is not so obvious and is controlled by the claims on the SAS policy that you are using (the keyName and sasKey that you pulled from the SAS policies blade in Azure).

When to use tokens:

Say you have a lot of different micro-services and they all need access to the Service Bus. You don't want to give each one a key, as you would then have to figure out what to do if a key got compromised. You also don't want all services to be able to do anything and everything on the Service Bus. In this case, you would need to stand up a token provider service. The microservices would then authenticate with the token service, and the token service would determine what each microservice is authorized to do and return a token that has the appropriate claims and scope.

Best practices:

As you can see, with the SharedAccessSignature option you still need to provide your token service the SAS key. Instead, with the release of managed service identity support on ASB, one can do this without the SAS key. So, if your service is going to be hosted in Azure, then MSI is definitely the way to go.

Wednesday, September 26, 2018

LogicApp–HttpRequest trigger: retrieving query parameters

With an HTTP request based trigger, how do you retrieve the query parameter values (eg: ?queryParamName=hello%20world)? I did not find an answer easily on the internet, so here it is:

@triggerOutputs()['queries']['queryParamName']

image

The above example will retrieve the value of a query param named “queryParamName” and return it as a response.

Another way to achieve the same thing, but in a simpler manner, is to use the relative path:

image

When you add the relative path, you will have to get the updated HTTP GET URL (it gets regenerated for you), and in the URL you will find:

invoke/queryParamName/{queryParamNameVar}

To call the logic app, all you have to do is replace the {queryParamNameVar} token with a value (eg: invoke/queryParamName/hello%20world).

I prefer using the relative-path method, because you will find that the Logic App designer makes the variable (in this case queryParamNameVar) easily available to other actions, which is not just nicer, but also protects you from typos.
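
For reference, in code view the relative-path parameter can likely also be read with an expression of this shape (relativePathParameters is my recollection of the outputs property name; verify it against your trigger's run history):

@triggerOutputs()['relativePathParameters']['queryParamNameVar']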

Wednesday, September 12, 2018

Databricks–Using Databricks Secrets

Here are all the steps needed to set up a secret in Databricks (a Databricks-backed secret scope, not Key Vault). This works with the standard tier of Databricks:

Install Databricks CLI:
1. install databricks cli (needs python)
     pip install databricks-cli
2. Set up the databricks token (needs a token from user-settings in Databricks; also needs the host url, eg Host: https://northcentralus.azuredatabricks.net)
     databricks configure --token
3. Create a scope
     databricks secrets create-scope --scope dbSecretsScope --initial-manage-principal "users"
4. Add a key-value to scope
     databricks secrets put --scope dbSecretsScope --key mydbPassword --string-value myPasswordValue
5. List keys in scope
     databricks secrets list --scope dbSecretsScope


Create a Python notebook, add the following code to a cell, and run it:


sqlserver = 'xxxxx.database.windows.net'
port = '1433'
database = 'myFirstDb'
user = 'iamspecial'
pswd = dbutils.secrets.get("dbSecretsScope", "mydbPassword")
print("password is", pswd)  #will display [REDACTED] – very important!
table = 'myFirstTable'

## Load Data Frame ##
df1 = spark.read \
   .option('user', user) \
   .option('password', pswd) \
   .jdbc('jdbc:sqlserver://' + sqlserver + ':' + port + ';database=' + database, table)

df1  #display the dataframe to make sure we connected

Tuesday, September 04, 2018

Dynamics Error: Only owner can revoke access to the owner.

Error: Only owner can revoke access to the owner. CallerId: xxxxxx, OwnerId: yyyyyyy

I was getting this error when attempting to revoke access from yyyyyy via code on an account entity. What I found out was that while yyyyyy was the owner of the entity, I could not revoke their access.

Instead, I first had to assign the record to the new owner (xxxxxx), and only then could I revoke access from yyyyyy.
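
Here is a rough sketch of that order of operations using the SDK (this assumes an IOrganizationService named service and Guid variables for the record and the two users; the names are mine, not from the original scenario):

using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

var account = new EntityReference("account", accountId);

//1) assign the record to the new owner first
service.Execute(new AssignRequest
{
     Target = account,
     Assignee = new EntityReference("systemuser", newOwnerId)
});

//2) only then revoke the previous owner's access
service.Execute(new RevokeAccessRequest
{
     Target = account,
     Revokee = new EntityReference("systemuser", previousOwnerId)
});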

Tuesday, August 21, 2018

Dynamics CRM–debugging when using executionContext

Xrm.Page is being deprecated, and one needs to pass the executionContext to the JavaScript function to get at the data that was available from Xrm.Page. But how do you test your function from the browser's debugger?

What I do is create my own executionContext variable and then pass it to my JS function to see how it would work:

fakeExecutionContext = {
     getFormContext: function(){return Xrm.Page;},
     context: Xrm.Page.context
}

and you would call it like this:

myTestDynamicsFunction(fakeExecutionContext);

Thursday, August 16, 2018

Azure PowerShell Error: Please provide a valid tenant or a valid subscription

You might get the “Please provide a valid tenant or a valid subscription” error when running some Azure PowerShell commands (I got it when I ran “Get-AzSKAzureServicesSecurityStatus”).

The first thing to make sure is that you have at least version 5 of PowerShell. You can find this by running “$PSVersionTable” and making sure your PSVersion value is 5.0 or greater.

In my case, what I found out was that the reason I was getting this error was that the subscription id I was providing did not belong to the account I was logged in with on the computer. So what I had to do was log in to Azure using “Login-AzureRmAccount”. Once I had done that, running the command worked flawlessly.
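
A minimal sketch of that fix (I believe Login-AzureRmAccount accepts -SubscriptionId in the AzureRM module; the GUID below is a placeholder):

#log in against the specific subscription you intend to work with
Login-AzureRmAccount -SubscriptionId "00000000-0000-0000-0000-000000000000"

#verify the current context points at the expected tenant and subscription
Get-AzureRmContext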

Thursday, August 02, 2018

Azure Logic Apps–Sql Server Triggers being skipped

I was trying to use the Logic Apps “Sql Server” trigger (When an item is created), and the trigger was always being skipped.

After much testing and looking, I found the following requirements for a SQL Server trigger: https://flow.microsoft.com/en-us/blog/introducing-triggers-in-the-sql-connector/

Limitations

The triggers do have the following limitations:

  • Table must have an IDENTITY column for the new row trigger
  • Table must have a ROWVERSION (a.k.a. TIMESTAMP) column for the modified row trigger


A table with the following definition will trigger Logic Apps when data is added (or updated, based on the type of trigger you use):

CREATE TABLE [dbo].[Test2](
     [Id] [int] IDENTITY(1,1) NOT NULL,
     [RowVersion] [timestamp] NOT NULL,
     [Message1] [nvarchar](50) NOT NULL,
     [Message2] [nvarchar](50) NOT NULL,
  CONSTRAINT [PK_Test2] PRIMARY KEY CLUSTERED
(
     [Id] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
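
If you already have a table that is missing the ROWVERSION column, you should be able to add one in place (a sketch, with a hypothetical table name):

ALTER TABLE [dbo].[MyExistingTable] ADD [RowVersion] rowversion NOT NULL;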

It's unfortunate that Logic Apps works without any errors when you don't have an identity and a rowversion column defined on your table. It would have saved me some time if it had!

Monday, July 30, 2018

Icons for Dynamics CRM

FlatIcons.net is a great resource for free icons for Dynamics CRM.

I have found the following settings work best (you need a 16x16 and a 32x32 icon, and I use 0 padding on both).

The icon color is set to 656565.

image

And here is an example of how it looks

image

Monday, July 23, 2018

Dynamics CRM–My Apps link is gone. Fixing access to apps

Not sure if this is a v9.0 issue or not, but I don't have the “My Apps” link, which is needed to manage the applications available to your organization (specifically, I needed to manage roles to hide some of the apps from users).

image

To fix this:

  1. Open the default site map:
    1. Go to Customizations >> Customize the System
    2. Client Extensions
    3. Open the “Site Map”
  2. Add “My Apps” link
    1. Select the Settings area.
    2. Drag a “Sub Area” onto one of the groups (I added mine under “Customization”)
    3. Set the following values:
      1. Type: URL
      2. URL: /tools/AppModuleContainer/applandingtilepage.aspx
      3. Title: My Apps
    4. Save and Publish.

You should now have a link to My Apps:

image

And you should be able to fix the roles via:

image

Friday, July 06, 2018

SLAs behind the scenes

To understand how SLAs work, it's good to follow how things get set up on a new entity that is enabled for tracking SLAs.

image

  1. First let's go over the 3 entities that define the SLA:
    1. SLA: This is the main SLA record. It records whether it's the default SLA, which entity it applies to, etc. In addition, it has a collection of 1 or more SLA Items.
    2. SLA Item: Each SLA Item defines the criteria under which it applies (Applicable When), and each can have a different set of time limits and actions. The SLA Item also tracks which field on the entity it is supposed to update (this is used by the SLA KPI Instance). In the above case, the SLA Item would point at new_resolvebykpi. One can add more than one SLA KPI field.
    3. SLA KPI Instance: When the entity record gets created, an SLA KPI Instance is created to track the SLA. This instance gets its details from the SLA and SLA Item records at the time of record creation (an important thing to remember, as changes don't get applied retroactively to existing SLA KPI Instances). A separate KPI instance is created for each KPI being tracked on the entity.
  2. When you enable an entity for SLA: (Red boxes)
    1. The entity gets a lookup field called SLAID that points to the SLA.
    2. In addition, on the SLA KPI Instance entity, a new N:1 relationship is set up that allows the SLA KPI Instance to point at the entity on which the SLA is being tracked.
  3. When you add a KPI field (you do this by setting it up as a lookup to SLA KPI Instance) (green box)
    1. The new field allows CRM to track each KPI separately.
  4. Workflows that manage the SLAs:
    1. Once the SLAs have been created and a new record is created on which an SLA is being tracked, the workflows fire and set up the fields on the SLA KPI Instance records based on the SLA details.
    2. These workflows can be found by performing the following advanced find:
      image
    3. Something to note: you will find multiple workflows when you run your search. If you sort by status, let's first go over the workflows that are in Activated status:
      1. There is one workflow that will have the name of your SLA. This is the main workflow; it is invoked by the system and creates the SLA KPI Instance records. It also updates the SLA KPI Instances based on pause statuses, etc.
      2. There is a workflow for each of the SLA items you created. These workflows have wait steps that allow the workflow to track the Warning timeout and failure timeout values. These workflows look basically like this:
        image
        These workflows run in the background, and hence there can be timing issues: the UI doesn't always update when the timers run out. This is the same reason that when you perform a resolution action (such as setting a boolean field like First Response Sent), the UI doesn't update immediately and you sometimes have to refresh it to see the updated status.
      3. Finally, you will find that there are some workflows in a Draft status. Every time the SLA is deactivated and reactivated, it creates a new copy of the workflows and deactivates the old ones. This does clutter the system, and I have not found a way to clean up these old workflows. My best guess is that they are kept around for historical purposes; I am not sure if a background process comes along and deletes them later.

Enabling an entity for SLAs

For a quick intro, first read: http://blog.aggregatedintelligence.com/2018/07/creating-enhanced-sla.html

Enhanced SLAs allow you to track SLAs on any entity. For this example we will use the “Work Order” entity (which behind the scenes is pretty much a custom entity).

  1. Enable the entity to track SLAs:
    1. Go to customizations and then open the General tab for the custom entity you wish to work with (in this case, I am doing it for “Work Order”).
      1. Doing this adds 2 new fields on the entity: slaid and slainvokedid
  2. Add a KPI field:
    1. Now that the entity is enabled for SLAs, you need to add a KPI Instance field. Let's add one to track the resolve-by KPI.
    2. Under “fields” Click on New.
    3. Set the following values:
      1. Display Name: Resolve By KPI
      2. Name: xxx_ResolveByKPI
      3. Data Type: Lookup
      4. Target Record Type: SLA KPI Instance
    4. Click Save and Close
  3. Publish the changes
  4. Setup an SLA
    1. Go to Settings >> Service Management >> Service Level Agreements
    2. Click on New. Under entities, you should see your custom entity (in this case Work Order). If not, check step 1.
    3. Click Ok. Leave the defaults on the next screen and click “Save”
    4. Add a new SLA Item by clicking “+” under “SLA Details”
    5. Set the following values:
      1. Enter name: SLA Resolve By – Default
      2. SLA KPI: Resolve By KPI (if you don't have anything in the drop down, check step 2).
      3. Applicable When: leave empty
      4. Success Criteria: I am tailoring this for Work Order, but you could simply choose Status: Inactive, etc
        Work Order >> System Status >> Equals >> Open – Completed; Closed – Posted
        image
    6. Setup SLA Item Failure and Item Warning values (I used: 3 days and 1 day respectively).
    7. Click Save and Close.
    8. Activate the SLA and set it as default.
  5. Add the SLA KPI quick view to the custom entity form.
    1. Open the custom entity form (in this case Work Order).
    2. Insert a “Quick View Form”
      image
    3. Save and publish

Testing:

  1. Create a new record for your custom entity (in this case, I created a work-order)
  2. You should see your SLA details:
    image
    And after the System Status was set to Open – Completed:
    image
    Note: You will likely have to refresh for the timer to go away, as there seems to be a bug or a timing issue with the UI (the data itself is all correct).

Notes:

As of this blog post, here are some limits you need to be aware of (and they only apply to the online CRM instance):

  • The maximum number of entities that can have active SLAs for your organization is 7.
  • The maximum number of SLA KPIs allowed per entity for your organization is 5.

Creating an Enhanced SLA

For this example, we will create an SLA on the Case entity to measure the first response sent KPI. We will set it up such that we expect the first response to be sent within 15 minutes of the case being created, and a warning will be displayed if the first response has not been sent within 5 minutes.

  1. Create an SLA Record
    1. Settings >> Service Management
    2. New
      image
    3. Leave the default entries and click “Save”
      image
  2. Click the “+” under “SLA Details” to create a new SLA Item
    1. Enter a name: “First Response”
    2. Applicable When: Leave empty
    3. Success Criteria: This is the field that will be checked to see if this SLA item is considered a success or not:
      image
    4. SLA item failure: This is the time after which this SLA is considered to have failed.
    5. SLA item warning: This is the time after which the SLA will be shown as being close to the failure state.
      image
    6. Note: After you save, you will have the ability to define actions that should be undertaken if the success, failure, or warning states have been achieved. This can be used to send an email, etc.
    7. Click Save
  3. Activate the SLA and then set it as Default.
    image
  4. Create a quick view to show the SLA details
    Now that you have created the SLA, you need to be able to display it to the user. For enhanced SLAs, you do this using a quick view form.
    1. Open up the default solution (Customizations)
    2. Under entities, navigate to “SLA KPI Instance”
    3. Under Forms, select “Create a Quick View Form”
    4. Choose the “Insert” tab and then click “Timer”
    5. Enter the following details:
      image
  5. Save and then Publish the Form.
  6. Your final step is to include the Quick View on to the Case form
    1. In the same customizations window, go to Entities >> Case >> Forms.
    2. Open your Case form.
    3. Make sure the “First Response Sent” field is on the form.
    4. Next, insert a quick view form with the following info:
      image
    5. Save and Publish the Form.

Testing the changes:

  1. Create a case record.
  2. It should look like this:
    image
  3. After you set “First Response Sent” to Yes and Save, the view should look like this:
    image
    Note: If the countdown timer does not automatically stop and show success, try refreshing the page, as I think there may be a bug in the display code that doesn't let it update right after the save.

Dynamics CRM–Finding Workflows connected to SLAs

You need to setup your advanced find like so:

image

<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="false">
   <entity name="workflow">
     <attribute name="workflowid" />
     <attribute name="name" />
     <attribute name="category" />
     <attribute name="primaryentity" />
     <attribute name="statecode" />
     <attribute name="createdon" />
     <attribute name="ownerid" />
     <attribute name="owningbusinessunit" />
     <attribute name="type" />
     <attribute name="rendererobjecttypecode" />
     <order attribute="name" descending="false" />
     <filter type="and">
       <condition attribute="category" operator="eq" value="0" />
       <condition attribute="rendererobjecttypecode" operator="in">
         <value>9750</value>
         <value>9751</value>
         <value>9752</value>
       </condition>
     </filter>
   </entity>
</fetch>

Monday, June 25, 2018

Opening a Dynamics CRM form directly

Sometimes when opening an entity record in Dynamics CRM, you get the error “Record is Unavailable” and the log file contains the following error: “Microsoft.Crm.CrmException: systemform With Id = xxxxxxxxxxx not found”. I am not sure why exactly Dynamics CRM gets into this state; some of the solutions online are to clear your browser cache, etc.

In my case, the only way I was able to get the form to open was to use a link that forced a different form to be opened. You can do this in one of 2 ways:

1. Have someone open an entity that you are having issues with and have them email you the link, using the Email Link functionality in the ribbon bar. Or

2. Use the following link and fill in the highlighted sections and enter it in the browser:

https://[dynamicsCrmUrl]/main.aspx?pagetype=entityrecord&etn=[entityNameHere]&extraqs=formid%3d[GUIDHere]

And here is an example of how that might look:

https://myorgname.crm.dynamics.com/main.aspx?pagetype=entityrecord&etn=account&extraqs=formid%3d6da2bf43-566a-4622-9c06-f350ac6ebf67
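
If you can get to a browser console on any page of the org, the v9 client API can likely be used to force a specific form as well; a sketch with placeholder values:

Xrm.Navigation.openForm({
     entityName: "account",
     entityId: "[recordGuidHere]",
     formId: "[formGuidHere]"
});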

Thursday, June 14, 2018

Set-AzureRmAppServicePlan throws “Long running operation failed with status 'Conflict'” error

I was trying to go from a standard tier to the free tier and I got a conflict error.

There are many reasons that you might get this error, and they are typically related to some functionality that is not available in the free tier but that you enabled in a paid tier. To figure out why, go into your App Service Plan and look at your activity log. You should see an entry for operation name “Update hosting plan” with a status of Failed. If you look at the “Json” and search for “statusMessage”, you should be able to figure out why.

In my case the error was caused by the fact that I was using a hybrid connection.

"Cannot change to the target SKU 'Free' because the Hybrid Connections count will exceed the new limit of '0'\"

Wednesday, June 13, 2018

Connect to Azure RM using a Service Principal

$applicationId = "Guid"
$tenantId = "Guid"
$subscriptiondId = "Guid"
$sharedSecret = "SharedSecretKey"

#convert to securestring
$secpasswd = ConvertTo-SecureString $sharedSecret -AsPlainText -Force

#create credential object
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $applicationId, $secpasswd

#connect to Azure
Connect-AzureRmAccount -ServicePrincipal -Credential $credential -TenantId $tenantId -SubscriptionId $subscriptionId

#test it!
Get-AzureRmResourceGroup

Tuesday, May 22, 2018

Creating an Azure Service Principal for Visual Studio

When you need to deploy from VSTS (VisualStudio.com) either via a build or via the release pipeline, you need a service principal that allows VSTS to access your Azure subscription. When the Azure subscription is connected directly to your VSTS account, you can use the simple version of the “New Service Endpoint” dialog. But, if the VSTS or Azure subscriptions are disconnected, then you need to use the full version of the “New Service Endpoint” dialog. This post goes into how you can manually do that:

What you need on the Azure side:

  1. Azure Active Directory instance. If you don't have one, create it. The free tier will do.
    1. After you create the AAD instance, open the AAD resource and select “Properties”. Copy the value for “Directory Id”.
  2. Make sure “Microsoft Visual Studio Team Services” is registered as an Enterprise Application:
    1. Go to Azure Active Directory >> Enterprise Application.
    2. If “Microsoft Visual Studio Team Services” does not appear in the list of “All Applications”, click on “New Application” and then search for “Microsoft Visual Studio Team Services” and add it. This will add “Microsoft Visual Studio Team Services (Microsoft Visual Studio Online)” to the list of APIs you can authorize your SP to have access to (used below in step 3.8)
  3. The Service Principal is created by creating an “App Registration”.
    1. Click on “App Registrations” and then “New Application Registration”.
    2. Enter a name for the App Registration. You need to remember this name.
    3. Leave type as “web app/API”
    4. Enter a sign-on URL (it just needs to start with http:// or https://; it doesn't matter and can be any fake URL).
    5. Open the newly created “App Registration”
    6. Copy the “Application ID” value.
    7. Click on settings.
    8. Click on Required permissions and then, “Add”
      1. Select “Microsoft Visual Studio Team Services (Microsoft Visual Studio Online)”.
      2. The permission should be “Have full access to Visual Studio Team Services REST APIs”.
    9. Click Save.
    10. Click on Key
      1. Enter anything for “Key Description”
      2. Duration can be “Never expires”
      3. When you save, you will see a string value in the “Value” box. Copy and save this value (this is the client secret).
  4. Provide the service principal you created in step (3) access to your subscription.
    1. Go to your subscription in Azure portal.
    2. Choose Access Control
    3. Click Add and you can provide the SP direct access as a contributor to your subscription. Alternatively:
    4. You can provide access just to a specific Resource Group, which you can do by opening the RG and then updating the Access Control by providing access to the SP to that RG.
      1. Note: in a production or locked-down environment, I would provide access to the Service Principal either to a resource group or to the resource directly. Providing access directly to the subscription may allow anybody to deploy and create resources directly in production, which may or may not be ideal for your use case.
    5. Also, go to the “Overview” section of your Azure subscription and copy the “Subscription Id” and “Subscription Name”


In Visual Studio:

  1. Open your VSTS project.
  2. Click on the gear icon at the top and choose “Services”
  3. Click “New Service Endpoint” and then “Azure Resource Manager”
  4. At the bottom choose “use the full version of the endpoint dialog”. You should see this dialog:
    image
    1. Subscription Id: from 4.5
    2. Subscription Name: from 4.5
    3. Service Principal Client Id: from 3.6
    4. Service Principal Key: from 3.10.3
    5. Tenant Id: from 1.1

Now click on “Verify Connection” and it should come back with a value of “verified”.

Friday, May 18, 2018

Using CURL for some timing test

curl url -s --write-out "Total Time: %{time_total}  Code: %{http_code} downloadsize: %{size_download}" -o output

eg:

curl https://www.google.com -s --write-out "Total Time: %{time_total}  Code: %{http_code} downloadsize: %{size_download}" -o output

outputs: Total Time: 0.234000  Code: 200 downloadsize: 24744
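
If you need a finer breakdown of where the time goes, curl exposes additional --write-out variables:

curl https://www.google.com -s --write-out "DNS: %{time_namelookup}  Connect: %{time_connect}  TTFB: %{time_starttransfer}  Total: %{time_total}" -o output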

Saturday, March 10, 2018

My experience with a tankless water heater

The house that I lived in had a 50 gallon storage water heater which was installed when the house was built (around 1995). In 2017, it had put in almost 22 years of service, and instead of waiting for it to fail and dealing with all the problems that might come with that, I decided to proactively replace it.

One of the options I wanted to investigate was replacing the old water heater with a tankless water heater, which is supposed to be more efficient in its energy usage. What I found was that the price for tank + labor was quite a bit more: approximately $1800 for a storage water heater vs $4500 for a tankless. There are lots of variables in the quotes, as it depends on features, permits that you might have to pull, and installation location. It's normally cheaper to replace a storage water heater with another storage water heater, as it typically goes into the original location with very few plumbing changes. Whereas with a tankless water heater, you need to move it to a location where you can mount it (typically a wall) and it needs venting, etc. Apart from the price of a tankless water heater, something else that people complain about is the amount of time it takes to get hot water out of the faucet.

Having looked at my 2 major options, I opted to go with a tankless water heater, and the main reason was to get more space in my small basement. I don't think the energy savings will justify the price difference (at least in the short run of 5 to 10 years), but I think the extra space I get when I finish my basement will definitely make it worth it.

Water-Heater-Old

Water-Heater-new

The above pictures show the old storage water heater and the new tankless water heater, and the amount of space I saved by going to a tankless system that's mounted on the wall.

There are many tankless water heater brands: Rheem, Navien and Rinnai, just to name a few. I really liked the Navien for its looks, but I settled on the Rinnai RUR98iN. The RUR98iN is a 9.8 gallon-per-minute unit (i.e. it can raise the temperature of 9.8 gallons of water per minute by a certain amount; a RUR80 model would raise 8 gallons of water per minute by the same amount). Rinnai has a nice tool on their site that can recommend a tankless water heater model for your home: Residential Product Finder. The iN stands for indoor, natural gas. The R in RUR stands for recirculation, which means this water heater comes with a built-in recirculation pump (that Rinnai calls ThermaCirc360 technology).

Water-Heater-new-2

The recirc technology requires a special bypass valve to be installed at one of your farthest sinks. Once you have the bypass valve installed, you can set up the water heater to automatically turn on at predetermined times, and it will begin circulating water and pre-heating it, so that you don't have to wait for hot water, which is the biggest complaint that people have with tankless water heaters. In my experience with the old storage water heater, it used to take about 1 minute until hot water began flowing from the faucet in the mornings (at the sink that was farthest from the water heater). With the tankless water heater and recirc mode turned off, I find that it now takes about 2 minutes before hot water flows from the faucet (I have high efficiency faucets, which means 1.5 gallons per minute). Obviously, with the recirc mode turned on, there is almost no waiting for hot water (maybe 15 to 20 seconds). One issue I have with the Rinnai RUR98iN is that you can only control the time at which the recirc pump turns on in 1 hour increments. This seems inefficient, and I wish it was controllable at a finer grain of time (15 or 30 minutes). So for now, I have the recirc pump turned off, as I am trying to figure out the difference in energy consumption between using the recirc mode and not using it. Something else to note is that Rinnai has multiple ways to turn on the recirc pump (wifi, sensors, etc). These don't come standard with the unit and have to be bought separately.

Installation Experience

I got estimates from multiple companies. Some were proper plumbing companies, others were local plumbers. The price difference between the 2 groups was on average about $750 (the local plumbers being cheaper). With both groups, I requested that the work be done with all the proper permits being pulled. I chose to use a local plumber, not only because they were slightly cheaper, but because I wanted to support somebody local who came with good recommendations from the community. I bought the water heater personally from Lowes, as I was able to put it on my credit card, which worked out well for me. Using a local plumber didn't turn out so well, though, as the plumber was very busy and the entire process from when I bought the tank to when it got installed took almost 4 months. So your mileage may vary. Something I would advise is that, whichever option you pick, you must ask for a permit to be pulled. The main reason is that this is a gas appliance and you want all the safety measures to be taken during installation. Again, this is where my experience was not the best: if you look closely at my tankless water installation, there is no valve on the gas line into the unit; this is required for safety reasons, and a building inspector would not pass the permit until one is installed. Another piece that I think is missing is an expansion tank, which should be put on the water-in line. Now I need to find a different plumber who can remediate these issues and finish up the permit process. But apart from those 2 issues (that I know of), the local plumber did a good job on the installation; everything is nice and clean and is still working 3 months since it went online.

I also installed an explosive gas and carbon monoxide alarm in the basement, just to be sure that there were no leaks, etc (amazon link).

I will continue to post here any future updates I have with my experience with my tankless water heater.


Update: 9/8/2018:

The plumber that I had used did not install a gas shut-off valve at the tankless heater. This was a safety issue for me. I had Hot Water Now come out and install a gas valve. At the same time, I had them also perform a flush of the unit. The cost was $150 + parts. It definitely gives me peace of mind, and next time I can perform the flush myself, as the unit now has the gas shut-off valve.

Also, now that I have had the unit running for almost 8 months I have some energy usage stats:

image

The unit was installed in December, and the graph above shows the energy usage and temperature over that period. The winter months show a higher natural gas usage, and I think that is because of the lower temperatures. What is interesting is the energy savings over the spring and summer months, as those are the months where the only gas consumption was by the water heater; the furnace was off for that entire period (red box). As you can see from the table below, the savings are quite dramatic:

image

Though the $ amount saved was about $40 for the entire period (Jan to Aug).


Here are some extremely useful videos regarding Tankless water heaters:

Flushing/Descaling

Tankless water heaters should last a long time (20+ years, as opposed to 10 years for modern storage water heaters). But there is maintenance that one needs to do: approximately every year or 2, you need to perform a flush to descale the inside. Here is a video that shows you how this is done: