Thursday, October 18, 2018

Cannot convert the literal 'undefined' to the expected type 'Edm.Guid'

If you get this error when working in JavaScript with Dynamics CRM (or any other system that uses OData), then you most likely need to set up your object like this:

var request = {

    ObjectId: {guid: "E8C656B7-6AD1-E811-A967-000D3A30D5DB"}

};

Otherwise, the request that gets sent will contain an undefined value, and the response will have an error with the following message:

message=An error occurred while validating input parameters: Microsoft.OData.ODataException: Cannot convert the literal 'undefined' to the expected type 'Edm.Guid'. ---> System.FormatException: Guid should contain 32 digits with 4 dashes (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).

Tuesday, October 16, 2018

Dynamics CRM–Business Rules or Workflows?

When should you use a business rule (BR) and when should you use workflows?

The first thing to understand is that business rules are just another type of process (just like workflows and business process flows). You can verify this by performing an advanced find on processes and setting the category to Business Rule. The other options for category are:

[screenshot: the process category options shown in advanced find]

Now, when you create a business rule, its scope gives you 3 options (2 really): one applies it to forms (all forms or a specific form), and the other applies it to the entity. When you choose the entity scope, create your BR, and activate it, you will find 2 entries under Processes with the same name. One of them handles the JavaScript part (which runs client side). The other is a synchronous workflow that runs on create and on update of the fields used in the business rule's conditions. You can look at the JavaScript that will run by inspecting the ClientData field on the process record; the XAML field contains the workflow definition.
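
If you want to poke at these two records yourself, a quick way is the Web API from the browser console (v9+). This is a sketch, assuming a business rule named "My Business Rule":

Xrm.WebApi.retrieveMultipleRecords(
    "workflow",
    "?$select=name,category,clientdata,xaml&$filter=name eq 'My Business Rule'"
).then(function (result) {
    // Expect two records: one with clientdata populated (the client-side piece)
    // and one with xaml populated (the synchronous workflow).
    result.entities.forEach(function (p) {
        console.log(p.name, "category:", p.category,
            "clientdata:", p.clientdata ? "present" : "empty",
            "xaml:", p.xaml ? "present" : "empty");
    });
});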

Why is this important to know? Here is why:

1. If the logic needs to run client side, then business rules are your best option. This is especially true if you want to show error messages or perform data validation.

2. But if the rule applies at the entity level and doesn't really involve validation or error messages, then you have a choice: if you use a workflow, you can decide to run it in the background (asynchronously).

Some other things to consider:

Business rules have a nicer UI. But every business rule that fires at the entity level creates a new synchronous workflow that fires on create and on update of a single field. If you use workflows, you have more choices: you can create a single workflow that runs on create and does everything that needs to happen on create, which in my opinion is more efficient than having multiple workflows all firing one after the other.

Also, look at the limitations of business rules as documented in the Microsoft docs: https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/admins-customizers-dynamics-365/mt826761(v=crm.8)#limitations

Inner workings of business rule JavaScript:

Finally, if you are curious about the inner workings of business rules, see the code at: https://github.com/rajrao/CRM-Tools/tree/master/JavaScript/BusinessRules

But basically, here are the main things to know (a rough sketch of the wiring follows the list):

  • The JavaScript for the business rule is injected into the form script for the form.
  • The function itself is set up to run on change of the fields used in the rule's conditions.
  • The function is also run once at form startup.
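
Conceptually, the wiring looks something like the sketch below. This is hand-written for illustration, not the actual generated code, and "aaa_myfield" stands in for a field used in the rule's conditions:

function businessRuleSketch(executionContext) {
    var formContext = executionContext.getFormContext();
    var attribute = formContext.getAttribute("aaa_myfield");
    var control = formContext.getControl("aaa_myfield");
    // Run the rule's logic: flag the field when the condition isn't met.
    if (attribute.getValue() === null) {
        control.setNotification("A value is required.", "br_sketch");
    } else {
        control.clearNotification("br_sketch");
    }
}

// Wired up when the form loads: register on change of each condition field
// and run the rule once at startup.
function onFormLoad(executionContext) {
    var formContext = executionContext.getFormContext();
    formContext.getAttribute("aaa_myfield").addOnChange(businessRuleSketch);
    businessRuleSketch(executionContext);
}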

Wednesday, October 03, 2018

ODataException: An undeclared property 'aaa_xxxxxid' which only has property annotations in the payload but no property value was found in the payload

If you get the error below, then know this: case matters and the field name is the schema name of the field!

An error occurred while validating input parameters: Microsoft.OData.ODataException: An undeclared property 'aaa_xxxxxid' which only has property annotations in the payload but no property value was found in the payload. In OData, only declared navigation properties and declared named streams can be represented as properties without values.

Please note that the field names are case sensitive and go by the “schema name”. In my case, the schema name was ‘aaa_XxxxxId’. Once that was fixed the code began working!

var entity = {};
entity["customerid_account@odata.bind"] = "/accounts(21876381-dc67-e811-a954-000d3a378ca2)";
// Note the casing: aaa_XxxxxId (schema name), not aaa_xxxxxid (logical name)
entity["aaa_XxxxxId@odata.bind"] = "/aaa_xxxxx(c9edf6c0-d670-e811-a958-000d3a3780dd)";
entity.caseorigincode = 1;
entity["msdyn_incidenttype@odata.bind"] = "/msdyn_incidenttypes(6760f919-77b0-e811-a95c-000d3a378f36)";

Tuesday, October 02, 2018

Creating and Using Tokens with Azure Service Bus

The following notes are with regard to the “.NET Standard” Microsoft.Azure.ServiceBus library and detail what each parameter means (because I found the documentation lacking in this regard).

Creating a token

Tokens are created using a TokenProvider. There are many options for instantiating a token provider; the one I am showing here utilizes a Shared Access Signature (SAS). But if you are running your code in Azure, a better option would be to use the managed service identity token provider, as one doesn't need to share keys, etc.

To create a token using “CreateSharedAccessSignatureTokenProvider(string keyName, string sharedAccessKey)”, you need a key name and its key. These are retrieved from the “Shared access policies” blade in Azure. Once you have these 2 values, you can create a tokenProvider.

TokenProvider tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(keyName, sasKey);

The CreateSharedAccessSignatureTokenProvider has overloads, in which you can specify the TTL of the token and the scope (namespace vs entity).

To create the token, you need to use the “GetTokenAsync(string appliesTo, TimeSpan timeout)” method on the tokenProvider. The “appliesTo” parameter defines either the namespace or the entity and should be specified as follows:

For access to the entire namespace: testservicebusinstance.servicebus.windows.net
For access to an entity in the namespace called myFirstQueue: testservicebusinstance.servicebus.windows.net/myFirstQueue
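
Putting the two calls together, requesting a token scoped to the queue looks something like this (a sketch; the 30-minute timeout is just an example value):

var token = await tokenProvider.GetTokenAsync(
    "testservicebusinstance.servicebus.windows.net/myFirstQueue",
    TimeSpan.FromMinutes(30));
// token.TokenValue now holds the SAS string that gets handed out to callers.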

Using the token

The GetTokenAsync method returns a token. The token contains a property called TokenValue. You use this TokenValue to work with the service bus.

var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(token.TokenValue);
var client = new QueueClient("testservicebusinstance.servicebus.windows.net", "myFirstQueue", tokenProvider);
.
.
.
await client.SendAsync(message);

Controlling access to the service bus:

There are 2 things that are possible:
1. restrict the scope to a specific entity
2. specify whether the token can manage, send, or listen

The first one I showed you in “Creating a token”. The second one is not so obvious and is controlled by the claims on the SAS policy that you are using (the one whose keyName and sasKey you pulled from the “Shared access policies” blade in Azure).

When to use tokens:

Say you have a lot of different micro-services and they all need access to the Service Bus. You don't want to give each one a key, as you would then have to figure out what to do if a key got compromised. You also don't want every service to be able to do anything and everything on the Service Bus. In this case, you would need to stand up a token provider service. The microservices would then authenticate with the token service, and the token service would determine what the microservice is authorized to do and return a token that has the appropriate claims and scope.

Best practices:

As you can see, with the SharedAccessSignature option you still need to provide your token service with the SAS key. Instead, with the release of managed service identity support on Azure Service Bus, one can do this without the SAS key. So, if your service is going to be hosted in Azure, then MSI is definitely the way to go.

Wednesday, September 26, 2018

LogicApp–HttpRequest trigger: retrieving query parameters

With an HTTP request based trigger, how do you retrieve the query parameter values (e.g. &queryParamName=hello%20world)? I did not find an answer easily on the internet, so here it is:

@triggerOutputs()['queries']['queryParamName']

[screenshot: the expression used in a Response action in the Logic App designer]

The above example will retrieve the query param value for a query param named “queryParamName” and return it as a response.

Another way to achieve the same thing, but in a simpler manner, is to use a relative path:

[screenshot: the Request trigger configured with a relative path]

When you add the relative path, you will have to get the updated HTTP GET URL (which gets regenerated for you), and in that URL you will find:

invoke/queryParamName/{queryParamNameVar}

To call the logic app, all you have to do is replace the {queryParamNameVar} placeholder with a value (e.g. invoke/queryParamName/hello%20world).

I prefer using the relative-path method, because you will find that the Logic App designer makes the variable (in this case queryParamNameVar) easily available to other actions, which is not just nicer but also protects you from typos.
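
Since the designer screenshots don't carry over here, this is roughly what the two pieces look like in the Logic App's code view, trimmed to just the relevant parts (a sketch; the relative path and parameter name match the example above, and relativePathParameters is how I recall the path value being referenced in expressions):

"triggers": {
    "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
            "method": "GET",
            "relativePath": "/queryParamName/{queryParamNameVar}"
        }
    }
},
"actions": {
    "Response": {
        "type": "Response",
        "inputs": {
            "statusCode": 200,
            "body": "@triggerOutputs()['relativePathParameters']['queryParamNameVar']"
        }
    }
}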

Wednesday, September 12, 2018

Databricks–Using Databricks Secrets

Here are all the steps needed to set up a secret stored in Databricks itself (not in Key Vault). This works with the standard tier of Databricks:

Install Databricks CLI:
1. Install the Databricks CLI (needs Python)
     pip install databricks-cli
2. Set up the Databricks access token (needs a token from User Settings in Databricks, plus the host URL, e.g. Host: https://northcentralus.azuredatabricks.net)
     databricks configure --token
3. Create a scope
     databricks secrets create-scope --scope dbSecretsScope --initial-manage-principal "users"
4. Add a key-value to scope
     databricks secrets put --scope dbSecretsScope --key mydbPassword --string-value myPasswordValue
5. List keys in scope
     databricks secrets list --scope dbSecretsScope


Create a Python notebook, add the following code to a cell, and run it:


sqlserver = 'xxxxx.database.windows.net'
port = '1433'
database = 'myFirstDb'
user = 'iamspecial'
pswd = dbutils.secrets.get("dbSecretsScope", "mydbPassword")
print("password is", pswd)  # will display [REDACTED] – very important!
table = 'myFirstTable'

## Load Data Frame ##
df1 = spark.read \
    .option('user', user) \
    .option('password', pswd) \
    .jdbc('jdbc:sqlserver://' + sqlserver + ':' + port + ';database=' + database, table)

df1  # display the dataframe to make sure we connected

Tuesday, September 04, 2018

Dynamics Error: Only owner can revoke access to the owner.

Error: Only owner can revoke access to the owner. CallerId: xxxxxx, OwnerId: yyyyyyy

I was getting this error when attempting to revoke access from yyyyyy via code on an account entity. What I found out was that while yyyyyy was the owner of the record, I could not revoke yyyyyy's access to it.

Instead, I first had to assign the entity to the new owner (xxxxxx) and only then could I revoke access from yyyyyy.
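
In code (using the organization service from the SDK), the working sequence looks roughly like this. A sketch, assuming you already have an IOrganizationService and the relevant ids:

using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

static void ReassignAndRevoke(IOrganizationService service, Guid accountId, Guid newOwnerId, Guid previousOwnerId)
{
    // Assign the account to the new owner first...
    service.Execute(new AssignRequest
    {
        Target = new EntityReference("account", accountId),
        Assignee = new EntityReference("systemuser", newOwnerId)
    });

    // ...and only then revoke the previous owner's access.
    service.Execute(new RevokeAccessRequest
    {
        Target = new EntityReference("account", accountId),
        Revokee = new EntityReference("systemuser", previousOwnerId)
    });
}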

Tuesday, August 21, 2018

Dynamics CRM–debugging when using executionContext

Xrm.Page is being deprecated, and one needs to pass the executionContext to the JavaScript function to get at the data that was available from Xrm.Page. But how do you test your function from the browser's debugger?

What I do is I create my own executionContext variable and then pass it to my js function to see how it would work.

fakeExecutionContext = {
     getFormContext: function(){return Xrm.Page;},
     context: Xrm.Page.context
}

and you would call it like this:

myTestDynamicsFunction(fakeExecutionContext);
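
For completeness, the function under test might look something like this (a hypothetical example; it assumes the form has a "name" attribute):

function myTestDynamicsFunction(executionContext) {
    var formContext = executionContext.getFormContext();
    // Read an attribute the same way the real handler would.
    console.log(formContext.getAttribute("name").getValue());
}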

Thursday, August 16, 2018

Azure PowerShell Error: Please provide a valid tenant or a valid subscription

You might get the “Please provide a valid tenant or a valid subscription” error when running some Azure powershell commands (I got it when I ran “Get-AzSKAzureServicesSecurityStatus”).

The first thing to make sure is that you have at least version 5 of PowerShell. You can find this by running “$PSVersionTable” and making sure your PSVersion value is 5.0 or greater.

In my case, the reason I was getting this error was that the subscription id I was providing did not belong to the account I was logged in with on the machine. So what I had to do was log in to Azure using “Login-AzureRmAccount”. Once I had done that, running the command worked flawlessly.
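
In short (a minimal sketch; the subscription id is a placeholder, and Set-AzureRmContext is only needed if the login doesn't already default to the right subscription):

Login-AzureRmAccount
Set-AzureRmContext -SubscriptionId "<your-subscription-id>"
Get-AzSKAzureServicesSecurityStatus -SubscriptionId "<your-subscription-id>"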

Thursday, August 02, 2018

Azure Logic Apps–Sql Server Triggers being skipped

I was trying to use the Logic Apps “Sql Server” trigger (When an item is created), and the trigger was always being skipped.

After much testing and looking, I found the following requirements for a SQL Server trigger: https://flow.microsoft.com/en-us/blog/introducing-triggers-in-the-sql-connector/

Limitations

The triggers do have the following limitations:

  • Table must have an IDENTITY column for the new row trigger
  • Table must have a ROWVERSION (a.k.a. TIMESTAMP) column for the modified row trigger


A table with the following definition will trigger Logic Apps when data is added (or updated, depending on the type of trigger you use):

CREATE TABLE [dbo].[Test2](
     [Id] [int] IDENTITY(1,1) NOT NULL,
     [RowVersion] [timestamp] NOT NULL,
     [Message1] [nvarchar](50) NOT NULL,
     [Message2] [nvarchar](50) NOT NULL,
  CONSTRAINT [PK_Test2] PRIMARY KEY CLUSTERED
(
     [Id] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
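
If you need to retrofit an existing table, the missing columns can be added after the fact. A sketch, assuming a hypothetical table named MyExistingTable that has neither column yet (and that adding an identity column suits your data):

-- Needed for the "when an item is created" trigger
ALTER TABLE [dbo].[MyExistingTable] ADD [Id] [int] IDENTITY(1,1) NOT NULL;

-- Needed for the "when an item is modified" trigger
ALTER TABLE [dbo].[MyExistingTable] ADD [RowVersion] [timestamp] NOT NULL;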

It's unfortunate that Logic Apps works without any errors when you don't have an identity and a rowversion column defined on your table. It would have saved me some time if it had raised one!