PowerShell table output

So you wanna create a clean, readable PowerShell table with Sitecore data and are a newbie to PowerShell?  Here’s a tutorial for you.

It’s assumed that you already have Sitecore PowerShell Extensions (SPE) installed.  (This link shows you where to download it from the Sitecore Marketplace and also offers a brief tutorial on SPE.)


This powerful SPE command, Show-ListView, offers a table viewer that supports a multitude of functionality.  If displaying the data onscreen in a dialog is desirable, this is a good option.  Bonus! You can easily export your list as a CSV, Excel, HTML, JSON, or XML document, and even filter it.

Get-Item -Path master:\* | Show-ListView -Property Name, DisplayName, ProviderPath, TemplateName, Language

list-view basic

In the example above, the columns represented are standard properties available from within the Sitecore item.

Let’s do some customization

In the ListView table below, a table title and some detail information (in the yellow strip) were added.  Pagination is turned on with a page size of 10.

Also, not all the values in the columns shown are available directly from the Sitecore item.  Columns can be set up with ‘Expressions’ that can range from a simple variable value to an output of a function.

finish listview

To achieve the columns with custom expressions, functions were created and then called while building the report.  NOTE:  Make sure that PowerShell has already read in the functions (i.e., put them at the top of the script); otherwise, they will not be detected when the report is built.

In the screenshot below, see two functions at the top that are used within the table report call.  Also, report metadata can be set within the variable called ‘$props’.
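As a rough sketch of that pattern (the function names, paths, and fields here are my own placeholders, not the ones from the screenshot):

```powershell
# Define helper functions FIRST so PowerShell has read them in
# before the report pipeline calls them.
function Get-ItemAgeInDays {
    param($item)
    ((Get-Date) - [Sitecore.DateUtil]::IsoDateToDateTime($item["__Created"])).Days
}

function Get-AuthorName {
    param($item)
    $item["__Created by"] -replace "^sitecore\\", ""
}

# Report metadata, splatted into Show-ListView.
$props = @{
    Title           = "Content Audit"
    InfoTitle       = "Items under /sitecore/content/Home"
    InfoDescription = "Standard properties plus two expression columns."
    PageSize        = 10
}

Get-ChildItem -Path "master:\content\Home" |
    Show-ListView @props -Property Name, TemplateName,
        @{ Label = "Age (days)"; Expression = { Get-ItemAgeInDays $_ } },
        @{ Label = "Author";     Expression = { Get-AuthorName $_ } }
```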

[Screenshot: Show-ListView example code]

Objects built from reports

In addition, a [pscustomobject] can be built that has references to an array of items.

This is easily displayed with the Show-ListView command:

$newCustomObject | Show-ListView
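A minimal sketch of that approach (paths and property names are illustrative):

```powershell
# Build one custom object per Sitecore item and collect them in an array.
$newCustomObject = Get-ChildItem -Path "master:\content\Home" | ForEach-Object {
    [pscustomobject]@{
        Name     = $_.Name
        Template = $_.TemplateName
        Path     = $_.ProviderPath
    }
}

# Each object becomes a row; each property becomes a column.
$newCustomObject | Show-ListView
```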

That’s it!

For a complete list of properties, visit the Show-ListView explanation within the SPE documentation.



Language Fallback Gotchas in Sitecore 8.2.x

So you say that you’ve implemented Sitecore 8 but are having problems with Language Fallback?  You may need one or more patches.

Scenario setup:

  • Sitecore 8.2
  • Languages:
    • en (fallback language)
    • de-DE
  • Language fallback type:  field-level fallback
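For reference, field-level fallback in Sitecore 8.x is typically switched on per site with a config patch like the sketch below (the site name is illustrative); the ‘Enable field level fallback’ checkbox on the relevant fields and the fallback language on the de-DE language item must also be set:

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <sites>
      <site name="website">
        <patch:attribute name="enableFieldLanguageFallback">true</patch:attribute>
      </site>
    </sites>
  </sitecore>
</configuration>
```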


  1. Create and publish an EN item with several fields populated.
  2. Check Solr (en item is present)
  3. Check site. (en item is present as a web page and within website search)
  4. Create and publish a de-DE version of the item.
  5. Check Solr (de-DE item is present but computed fields are not populated)
  6. Check site. (de-DE item is present as a web page BUT not in the website search)

DIAGNOSIS:  Computed fields (which are used in search) are not populated for de-DE.


  • Create and publish an EN item without populating any fields.
  • Check Solr (en item is present)
  • Create and publish a de-DE version of the item.
  • Check Solr (de-DE item is present)
  • Populate a couple of fields in the EN item and SMART publish both languages.
  • Check Solr (en fields are populated for item)
  • Check Solr (de-DE fields are NOT populated for item)
  • Publish using REPUBLISH both languages
  • Check Solr (en fields are populated for item)
  • Check Solr (de-DE fields are populated for item)

DIAGNOSIS: Edited fallback fields do not propagate to the de-DE indexed item upon a Smart Publish but do upon a Republish.


Remember to always create a Sitecore Support ticket before applying any Sitecore patch on your own.  That said, here are the Sitecore patches that were applied to a Sitecore 8.2.1 environment and resolved many language fallback issues.  Please note that some of them were not earmarked as built for “Sitecore 8.2”; however, Sitecore Support advised applying them, and they still addressed our issue.

Sitecore.Support.217877 – Language fallback cache is not subscribed to remote events.
Sitecore.Support.222670 – The value which is specified in Standard Values for DateTime field is not added to the index.
Sitecore.Support.96931 – Computed index fields do not respect field-level language fallback settings. Item-level language fallback settings are also not respected if computed index fields are processed in parallel.
Sitecore.Support.130860 – Field-level fallback indexing fails to update the fallback value when the original one is changed.

Connecting Voice Assistants and Sitecore via Cognigy Part 3: Putting it all Together

So you wanna have an Alexa skill use Sitecore as its content source?  Sure, you can make a direct call from the Alexa Skill to a Sitecore web API but this approach can leave your code in a tangled web of ‘ifs.’  In this blog series, I’ll explain how Cognigy.AI can be used to effectively bridge the connection–and handle most of the logic–between voice assistants (like Alexa and Google Home) and Sitecore content.

Pre-requisites: Part 1: Overview and Part 2: Connecting Alexa and Cognigy

Part 3:  Putting it all Together

In Part 1, I offered a brief overview of Cognigy.AI and explained how using Cognigy.AI can save time and code by handling the conversational logic when working with a digital channel (like a voice assistant, IoT device, or AR/VR) while pulling data from an external repository such as Sitecore.

In Part 2, I dove into the necessary components to create an Alexa skill that can be used as the conversation starter with Cognigy.AI.  Part 2 left us with a handful of JSON handed from Alexa to Cognigy.AI.  What do we do with it?

json alexa intent example

Cognigy Input Store

When data is passed from an endpoint into Cognigy.AI, it is saved as JSON in the ‘data’ node of the store called the “Cognigy Input” store (or ci).  This data lasts the lifetime of the current request.
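Trimmed down, the shape of the ci store for our weather example might look like this (values are illustrative):

```json
{
  "data": {
    "request": {
      "type": "IntentRequest",
      "intent": {
        "name": "WeatherIntent",
        "slots": {
          "City": { "name": "City", "value": "Seattle" }
        }
      }
    }
  }
}
```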

Target Flow for an Endpoint

We’ll use the JSON data in a Cognigy flow.  A Cognigy.AI project is made up of several components, including flows and endpoints.  The endpoint defines where the conversation can start from.  An endpoint has a Target Flow, and it is within this flow that we can start to evaluate the JSON data.

Based on the number of intents that exist within your connected Alexa skill, you’ll use the appropriate conditional logic node (either if..then or switch) in your Target Flow.  If you have three intents, it might look like the example below, with the bottom node acting as a catch-all.

switch statement

The switch statement is evaluating the property ci.data.request.intent.name from the Alexa data.  In this example, the Target Flow is being used to route to other flows based on the intent name.

Http Request

Now that we’ve identified the intent, we can call a web API endpoint in Sitecore to gather the data for the intent.  This can be accomplished with the Http Request node.

In the example below, we are using a GET Http Request.

Within the Url field, I am passing the City slot value to the API WeatherController’s GetCurrent(string city) action.  In the Headers area, the OAuth access token from Alexa’s Account Linking is passed.  The data returned from the API call is stored in the ContextStore, currentweather.
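Pulling that together, the node’s settings might look roughly like this (the route and token path are my assumptions based on the controller action above):

```
URL:     {{cc.apiBaseUrl}}/api/weather/getcurrent?city={{ci.data.request.intent.slots.City.value}}
Headers: Authorization: Bearer {{ci.data.context.System.user.accessToken}}
Store:   currentweather   (the response lands in cc.currentweather)
```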

[Screenshot: Http Request node]

Custom Modules

Cognigy.AI also offers the ability to set up Custom Modules within their Cognigy Integration Framework.  This Http Request node could be replaced by a module for Sitecore that contains pre-canned API calls.  This allows for more flexibility when creating flows.  With Custom Modules, flows can be set up by a non-technical user without knowing all the details necessary for making an API call such as the one illustrated above.  A GitHub sample of existing Custom Modules is available.

Cognigy Context Store

The Cognigy context store (or cc) is another data store that is available for the lifetime of the current session with Alexa (as opposed to the current request lifetime for ci).
Within the Http Request node properties, the ContextStore name is defined, and the JSON returned from the API call is stored in cc.[ContextStore].

In the image above, note the {{cc.apiBaseUrl}}. Cognigy.AI also allows you to store default data in the cc by defining the “Default Context” of each flow.  Flows can share the Default Context as well.

default context

Say Nodes

So, now you have your data back from Sitecore and you want to send it to Alexa.  Alexa accepts data using the Speech Synthesis Markup Language (SSML) format (which is based on XML) to assist with context and intonation.  Take a look at the Alexa SSML documentation.
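For the weather example, an SSML response might be marked up like this (values are illustrative):

```xml
<speak>
  The current temperature in Seattle is 54 degrees.
  <break time="500ms"/>
  Expect light rain this afternoon.
</speak>
```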

The SSML is added to the Say node’s “Alexa” channel via the SSML Editor, and it is sent to Alexa so she can convert the text to speech.  Cognigy.AI also supports sending visual imagery (text only, or text and image … video coming soon?) to devices with a display.

Note that the Say node is reused among other channels (see the channel icons at the top).  This allows you to have a single set of flows used by every channel while still allowing for the proper output per channel.

say node

Finishing the Conversation

When a flow is complete, the Say node SSML is passed back to Alexa via the Alexa endpoint in Cognigy.

[Screenshot: output JSON to Alexa]

Alexa then notes the outputSpeech in the response and speaks the SSML!
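Trimmed down, the response handed back to Alexa looks roughly like this (the SSML value is illustrative):

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>The current temperature in Seattle is 54 degrees.</speak>"
    },
    "shouldEndSession": true
  }
}
```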

End Notes

I have just touched the tip of the iceberg of Cognigy.AI capabilities but this blog series should give you a starting point for your exploration into voice technologies.  Stay tuned for an upcoming post regarding using Sitecore data within a Cognigy.AI chatbot!


cognigy logo
Special thanks to Andy and Derek of Cognigy for allowing use of their demo site!

  • Andy Van Oostrum – VP Sales North America : a.vanoostrum@cognigy.com
  • Derek Roberti – VP Technology North America: d.roberti@cognigy.com

See other posts in this blog series:

Connecting Voice Assistants and Sitecore via Cognigy Part 2: Connecting Alexa and Cognigy

So you wanna have an Alexa skill use Sitecore as its content source?  Sure, you can make a direct call from the Alexa Skill to a Sitecore web API but this approach can leave your code in a tangled web of ‘ifs.’  In this blog series, I’ll explain how Cognigy.AI can be used to effectively bridge the connection–and handle most of the logic–between voice assistants (like Alexa and Google Home) and Sitecore content.

Pre-requisite: Part 1: Overview

Blog Series Part 2:  Connecting Alexa and Cognigy.AI

In the previous post, I offered a brief overview of Cognigy.AI and explained how using Cognigy.AI can save time and code by taking on the conversational logic when working with a digital channel (like a voice assistant, IoT device, or AR/VR) and pulling data from an external repository such as Sitecore.  Let’s dive into the Alexa Skills side of things and also see how to hook it up via Cognigy.AI’s endpoint.

Amazon Alexa Skills Kit Developer Console

The Alexa Skills Kit Developer Console is already well-documented so I won’t go into too much detail.  Instead, I’ll focus on what you need to know to get started with an Alexa Skill.


When you set up a new skill, it needs to be invoked with a phrase.  Example invocations:

  • daily horoscopes – “Alexa, enable daily horoscopes”
  • current weather – “Alexa, load current weather”


To connect Alexa and Cognigy.AI, you’ll need the Cognigy.AI-generated endpoint URL.  You can place it in the Alexa skill’s Service Endpoint setting, OR you can deploy the endpoint URL to the skill from Cognigy.AI by selecting the skill.  For faster throughput, Alexa allows for multiple endpoints based on region.

alexa service endpoint


An intent is an interpretation of a user request; it responds with an action that fulfills the request.

For example, let’s say that your local system stores the daily forecast of every major city in a designated area.  An Alexa Skill intent would be the starting point to fulfill the request of getting the daily weather (“DailyWeatherIntent”).  To match the Alexa user’s spoken words to an intent, the intent must have utterances.

Alexa also has some built-in intents that you can incorporate within your skill.

built in intents


An utterance is a sample phrase assigned to an intent that represents an anticipated request from the user.  The user’s request is then fulfilled with the action provided by the intent.

Utterances for the “DailyWeatherIntent” example might be the following:

  • What is the current weather?
  • What’s going on outside for today?
  • Is it going to rain?

While we are on the topic of utterances, we must talk about Natural Language Processors (NLPs).  Their primary responsibility is to convert between speech and text and to match the user’s words to meaning.  With machine learning, they are able to handle variations of an utterance and still provide a positive match to an intent.  This allows you to minimize the number of utterances that you have to list for an intent.  For example, the phrase “Tell me the weather” could be uttered and auto-matched to the utterance “What is the current weather?”  Each NLP has a different level of sophistication, so the number of utterances that you’ll need for an intent will vary based on the NLP used.  As you would expect, Alexa’s has been around for a while and is quite sophisticated.  Cognigy.AI has its own configurable NLP and is growing more intelligent with every release.


Aside from the NLP, utterances can be minimized with the use of slots.  Slots are words found in utterances that can be used in two ways:

  • as a synonym – to extend the NLP matching logic for an intent
  • as a variable – to pass on to the data repository (e.g. Sitecore)

A slot is represented within an utterance surrounded by curly braces {}. Let’s look at examples of both.

Synonym Slots

Utterance: What is the {weather}?

If the slot, {weather}, is defined with alternative context phrases like ‘forecast’ or ‘chance of rain,’ the following utterances are implied along with the utterance above and therefore do not need to be defined within an intent:

What is the forecast?
What is the chance of rain?

Variable Slots

Utterance: What is the current weather in {city}?

What is the current weather in Los Angeles?
What is the current weather in LA?

The slot can also represent a variable that is passed on to the data repository.  The slot type can be a finite list of acceptable slot values stored in the skill.  Each slot value can also have synonyms (like “Los Angeles” and “LA”) and an optional ID, which could be used to better match a value in the data repository.  The Alexa Skills Kit comes with pre-defined slot types, or you can spin up your own.
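In the skill’s interaction model JSON, such a slot type with a synonym and an ID might be sketched like this (names and IDs are illustrative):

```json
{
  "types": [
    {
      "name": "CityType",
      "values": [
        {
          "id": "LAX",
          "name": {
            "value": "Los Angeles",
            "synonyms": ["LA"]
          }
        }
      ]
    }
  ]
}
```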

A slot value could also be a true variable (without a list of possible values in the skill), simply passed to the data repository for reconciliation.  Use the AMAZON.SearchQuery slot type for this.  Only one such slot can be used per utterance.

Utterance: Switch to customer account {accountNumber}.

“Switch to customer account 12345.”

Required Slots

An intent may require that a slot is populated prior to its fulfillment.  In the example directly above, accountNumber is a slot that must be filled in order to switch customer accounts.  Its value could be gathered as shown in the utterance above or through a dialog between Alexa and the user.  Notice below that the utterance does not have a slot.

Utterance:  Switch the customer account


User: “Switch my customer account”
Alexa:  “To which account would you like to switch?”
User: “12345”
Alexa: “OK, one moment …”

slot filling

Account Linking

It is worth mentioning Account Linking.  Through OAuth and the Alexa App on the user’s phone, the user can log in one time to the data repository (like Sitecore) and establish an OAuth connection for the Alexa Skill.  This allows the user to access account information that would normally be exposed by logging in to their account on your web application.  Some great documentation is located on the Alexa Skills Kit site.

Account linking will enable Alexa to fulfill questions like:

Alexa, ask [your invocation] what is the next course that I should take?

Alexa, tell [your invocation] that I’d like to pay my invoice.

Alexa, ask [your invocation] for the balance on my account.

Alexa and Cognigy.AI JSON

And finally, let’s take a look at the JSON being sent from Alexa to Cognigy.AI.

Below we see that under context, the OAuth access token is sent (if account linking is set up).  This can be passed on to Sitecore from within Cognigy.AI.

Under request, we see the name of the intent and also any slots that have been fulfilled for the intent.  In the example below, we have a “WeatherIntent” with a slot of “City” fulfilled with the value of “Seattle.”
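A trimmed sketch of that request JSON (the token and values are placeholders):

```json
{
  "context": {
    "System": {
      "user": { "accessToken": "Atza|EXAMPLE-TOKEN" }
    }
  },
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "WeatherIntent",
      "slots": {
        "City": { "name": "City", "value": "Seattle" }
      }
    }
  }
}
```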

json alexa intent example

In the next post of the series, we’ll see how to consume this data within Cognigy.AI, grab data from Sitecore, and send an answer back to Alexa!  Stay tuned!

cognigy logo

  • Andy Van Oostrum – VP Sales North America : a.vanoostrum@cognigy.com
  • Derek Roberti – VP Technology North America: d.roberti@cognigy.com

See other posts in this blog series:

Connecting Voice Assistants and Sitecore via Cognigy Part 1: Overview

So you wanna have an Alexa skill use Sitecore as its content source?  Sure, you can make a direct call from the Alexa Skill to a Sitecore web API but this approach can leave your code in a tangled web of ‘ifs.’  In this blog series, I’ll explain how Cognigy.AI can be used to effectively bridge the connection–and handle most of the logic–between voice assistants (like Alexa and Google Home) and Sitecore content.

Blog Series Part 1:  Overview

Sitecore data and voice assistants? Why?

Since this is a Sitecore-based blog, let’s start here.  What value can an Alexa skill offer to a company with an experience platform like Sitecore?

Sitecore XP and XM are warehouses of content for a company.  Some of this content (when properly curated) can be beneficial to an Alexa end user.

Alexa, ask [your company] how I contact their support team?
– information from the website’s contact us page
Alexa, ask [your company] for the latest trends in real estate.
– the teaser or summary content for a popular article
Alexa, ask [your company] what products do they recommend for me?
– based on user browser habits or purchase history
Alexa, ask [your company] for the amount of my latest invoice
– information from Experience Commerce

From informational conversations to complex re-ordering within Experience Commerce, every Sitecore application has a use case for Sitecore content re-use across digital channels other than websites.  Cognigy.AI can help with this integration.

What is Cognigy.AI?

From their website, Cognigy says, “COGNIGY.AI is the Conversational AI Platform focused on the needs of large enterprises to develop, deploy and run Conversational AI’s on any conversational channel.”

Hmmm … how about a description without all the business language? …

Cognigy.AI lets you integrate chatbots (most used), virtual reality/augmented reality, voice assistants, and robotics/IoT with your existing websites and systems (like Sitecore) using natural language processing and easy-to-build logical flowcharts.  This reduces the amount of custom code written within your systems and allows for easy re-use of logic across channels.

The Cognigy software is always improving and they love client feedback. Their releases are usually chock-full of new goodies!

Cognigy.AI Basics

All the Cognigy.AI functionality is well-documented in their developer guide so I’ll provide highlights of the components that you may use for your Alexa/Sitecore connection.

project flow nodes


You’ll start off creating a project which you can match up to a general business need.  As an example, let’s consider logic that handles questions about the current weather.  You will have flows that will handle potential questions about the weather and endpoints to external channels (like a voice assistant) to gather the question.  This weather example could be considered a single Cognigy.AI project.


Flows hold the ‘guts’ of the logic in Cognigy.AI.  They are the graphical representation of the conversation for the project.  In keeping with our current weather example, a flow might offer the answer to the question, “What is the current temperature in {city}?”  Notice the curly braces around the generic word ‘city’?  That’s called a slot, and the flow could use the user’s value for the slot to properly route the conversation.

When constructing flows, it’s important to plan and follow the Single Responsibility principle.  Break up one large flow into smaller ones using the Execute Flow or Switch Flow nodes.


Nodes are the building blocks of a flow, similar to the Process, Decision, and Data symbols of a flowchart.  They are available within the Flow Editor.   There are many nodes available when building your flow and, again, the developer guide does a great job of describing them.  Here are the node groups with some example usage:

Node groups and examples:

  • Logic -> If node: just what you think it is … if, then, else logic; the cornerstone of the flow
  • Basic -> Say node: used to store text (or text-to-speech) output
  • Api & Db -> Http Request node: adds a GET, PUT, POST, or DELETE request to an API, passing collected data


Who is Cognigy.AI communicating with?  Where is the conversation coming from and going to?  A channel endpoint is another key component assigned in the project.  (This is not to be confused with the data repositories: the ERP or CMS systems from which data may be retrieved to complete a conversation.)  When an endpoint is set up, Cognigy.AI creates an endpoint URL to provide to the channel.  Here’s a list of the current endpoints as of Cognigy.AI 3.3.  Don’t see what you want here?  As mentioned, Cognigy loves client feedback.  Reach out!

cognigy endpoints

In the next post of this series, I discuss the fundamentals of setting up an Alexa Skill and what is passed to Cognigy.AI from Alexa.  Ready to get your Alexa Skills on?

cognigy logo

  • Andy Van Oostrum – VP Sales North America : a.vanoostrum@cognigy.com
  • Derek Roberti – VP Technology North America: d.roberti@cognigy.com

See other posts in this blog series:

Symposium 2018: Empowering Women in Technology and #movethedial

Another Symposium under the belt.  For me, this was the best one yet.  The content of the sessions I attended was decent (see my other post), but the bonding was my big takeaway.

Women in Digital #movethedial luncheon

The Wednesday luncheon was extra special.  The stage was owned by six well-spoken, smart, inspiring women.  However, they weren’t the only actors in the room.  At one point during the session, the panel called out to the men in the audience to thank them for attending.  It appeared that there was perhaps one man per every two to three tables.  (Thank you for attending!!)  I was taken aback when I realized how different the energy of the room was.  This was the polar opposite of the ratio you’d expect at a tech conference, and I finally felt like I truly belonged.  It’s now apparent to me that my gender somewhat drives my confidence in my career interactions; I hadn’t realized this before.  I’ll save my musings and stories for the Women In Sitecore blog that is coming soon, led by Amy Winburn.

Elevate your career experience: Empowering women in technology

I met some fabulous women at this year’s conference.  We initially ‘met’ (virtually) over the summer after the Women In Sitecore panel session was approved, but getting to hang out with them, see them in sessions, in the hallways, and at the parks was special for me.  For once I felt the kinship that I had seen among others at past Symposiums.  We had practiced our timing and words several times, and each time, at different points in each practice, I was moved to the point of tears, including during the session itself.  I had always figured that other women MUST be experiencing the same trials and tribulations as me, but it seemed awkward to reach out.  Again, I’ll be writing more about my past experiences and bonding in the upcoming Women In Sitecore blog.


Men and women both, please consider contributing and learning with us in the Sitecore Slack community channel:  #womenofsitecore.

My Symposium 2018 Takeaways and Sketchnotes

This Symposium was a different one for me.  The sessions dedicated to women and the sessions led by women were truly inspiring.  Also, I tried a different note-taking technique this year: sketchnoting.  See my previous post.  Here are my sketchnotes and some quick takeaways from each session:

The Cortex Engine:  Process at Scale

“Cortex is to Sitecore processing like xConnect is to xDB.”

Cortex is a set of components used to gather large amounts of data and process it, via machine learning, into marketing-ready information.  It uses tasks (distributed tasks and lower-priority deferred actions) to do the processing that is normally done manually by a marketing team.

A big takeaway from this session is that you’ll need a data scientist to provide direction to the developer for fine-tuning the Cortex model and then to be able to consume the processed data.  Cortex will be available in Sitecore 9.1.

cortex engine

Where Machine Learning Meets Social #ThinkYouKnowMe

In an age where many details about individuals are at our fingertips, it’s important not to overrate what you have on hand.

In this session, my big takeaway was that the static demographic info provided by social media is not always the smartest dataset to use for personalization.  It creates a broad-stroke reach, which is the opposite of the purpose of personalization.  Gather data only after the user converts on a goal, and look at keywords and tags, not the demographics of the person.

machine learning

.NET Core and Sitecore 9.1 architecture

In this session, we learned that the goal of the Sitecore 9.1 architecture, which has components built in .NET Core, is to give Sitecore smaller, testable components that leverage .NET Core whenever possible (like for diagnostics and logging … why re-invent the wheel?).

The new Sitecore Host lays the groundwork for a single way to start Sitecore applications.  It’s lean, versatile, and has a low hosting cost.

Sitecore Host is currently the base for the following three 9.1 components: Sitecore Identity (which controls everything auth-related, including federated auth features), Horizon (a drag-and-drop tool to eventually replace our friend the Experience Editor), and Universal Tracker (which gathers and quickly processes large amounts of mobile data before it eventually ends up in xDB).

Netcore and 9.1

Universal tracker and mobile analytics

Sitecore Universal Tracker – a new approach for tracking interactions in mobile.

Sitecore Universal Tracker is built to quickly process the huge amounts of data coming from mobile devices before it eventually goes into xDB.  It alleviates some of the processing that xConnect does, and it does not use the Web Tracker.  You can configure channels that have a pre-filter (to get rid of garbage data), enrichment (where the action is), and a post-filter (further filtering).  The performance (pass rate of interactions per second) goes up with DTU units.

Sitecore 9.1 Universal Tracker will include the SDK and Analytics recording.  Personalization will be included in a future release.

Universal Tracker

How to go faster: When Sitecore squadrons feel the need for speed

How to structure your dev squads to maximize collaboration and productivity.  I had another session in mind for this timeslot, but the speaker of this session, Adam Simmonds, was wonderfully complimentary about my sketchnotes, so I simply had to attend his session.  And I’m glad I did!

It was great to see what was accomplished on Open Universities Australia’s site to speed up deployments.  My biggest takeaway was the documentation done in the value stream mapping session that reveals the bottlenecks.  I can apply that today.

Clean up -> Reduce -> Automate

need for speed