Azure Log Analytics has recently been enhanced to work with a new query language. The language and the platform it operates on have been integrated into Log Analytics, which allows us to introduce a wealth of new capabilities and a new portal designed for advanced analytics.
This post reviews some of the cool new features now supported. The examples shown throughout the post can also be run in our Log Analytics playground — a free demo environment you can always use, no registration needed.
This is as simple as a query can get, but it's still valid: it simply returns everything in the Event table. Grabbing every record in a table usually means far too many results, though. When analyzing data, a common first step is to review just a handful of records from a table and plan how to zoom in on the relevant data. This is the general structure of queries — multiple elements separated by pipes, where the output of each element is the input of the next. In this case, the final query output will be 10 records from the Event table.
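A minimal sketch of this piped structure (the Event table and the limit of 10 follow the description above):

```kusto
Event
| take 10
```

Note that take returns an arbitrary 10 records, which is exactly what you want for a quick first look at a table's shape.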
After reviewing them, we can decide how to make our query more specific. Often, we will use where to filter by a specific condition.
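For instance, a sketch of such a filter (the EventLevelName column and value are assumptions for illustration):

```kusto
Event
| where EventLevelName == "Error"
| take 10
```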
Looks like our query still returns a lot of records though. To make sense of all that data, we can use summarize. Summarize identifies groups of records by a common value, and can also apply aggregations to each group. Try it out on our playground! Sometimes we need to search across all our data, instead of restricting the query to a specific table.
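A sketch of such a grouping (the Source column is an assumption for illustration):

```kusto
Event
| summarize count() by Source
```

Each distinct Source value becomes one group, and count() is the aggregation applied to each group.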
Scanning all data can take a bit longer to run. To search for a term across a set of tables, scope the search to just those tables. Note that search terms are case insensitive by default.
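A scoped search might look like this (the table names are illustrative):

```kusto
search in (Event, Syslog) "error"
```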
Search queries have many variants; you can read more about them in our tabular operators documentation. We often find that we want to calculate custom fields on the fly and use them in our analysis.

Application Insights log-based metrics let you analyze the health of your monitored apps, create powerful dashboards, and configure alerts. There are two kinds of metrics: log-based metrics and standard (pre-aggregated) metrics. Since standard metrics are pre-aggregated during collection, they have better performance at query time.
This makes them a better choice for dashboarding and real-time alerting. The log-based metrics have more dimensions, which makes them the superior option for data analysis and ad hoc diagnostics.
Use the namespace selector to switch between log-based and standard metrics in metrics explorer. This article lists metrics with supported aggregations and dimensions. The details about log-based metrics include the underlying Kusto query statements. For convenience, each query uses defaults for time granularity, chart type, and sometimes splitting dimension, which simplifies using the query in Log Analytics without any need for modification.
When you plot the same metric in metrics explorer, there are no defaults; the query is dynamically adjusted based on your chart settings:

- The selected Time range is translated into an additional where timestamp clause.
- The selected Time granularity is put into the final summarize clause.
- Any selected Filter dimensions are translated into additional where clauses.
The selected Split chart dimension is translated into an extra summarize property. For example, if you split your chart by location and plot using a 5-minute time granularity, the summarize clause ends with by bin(timestamp, 5m), location. If you're new to the Kusto query language, start by copying and pasting Kusto statements into the Log Analytics query pane without making any modifications. Click Run to see a basic chart. As you begin to understand the syntax of the query language, you can start making small modifications and see the impact of your change.
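Putting those adjustments together, the dynamically built query might look roughly like this sketch (the requests table and location dimension follow the example above; the exact clauses depend on your chart settings):

```kusto
requests
| where timestamp > ago(24h)
| summarize count() by bin(timestamp, 5m), location
| render timechart
```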
Exploring your own data is a great way to start realizing the full power of Log Analytics and Azure Monitor. Metrics in the Availability category enable you to see the health of your web application as observed from points around the world.
Configure the availability tests to start using any metrics from this category.
The Availability metric shows the percentage of the web test runs that didn't detect any issues. The lowest possible value is 0, which indicates that all of the web test runs have failed.
A value of 100 means that all of the web test runs passed the validation criteria. The Availability test duration metric shows how much time it took for the web test to run. For multi-step web tests, the metric reflects the total execution time of all steps.
The Availability tests metric reflects the count of the web test runs by Azure Monitor. Browser metrics provide great insights into your users' experience with your web app. Browser metrics are typically not sampled, which means that they provide higher precision of the usage numbers compared to server-side metrics, which might be skewed by sampling. The metrics in the Failures category show problems with processing requests, dependency calls, and thrown exceptions. The Browser exceptions metric reflects the number of exceptions thrown from your application code running in the browser.
The Exceptions metric shows the number of logged exceptions. The Failed requests metric is the count of tracked server requests that were marked as failed. You can customize this logic by modifying the success property of the request telemetry item in a custom telemetry initializer.

You can now apply the powerful Analytics query language to high-volume NoSQL data streams that you import from any source.
You can display the results in Power BI or Azure dashboards, and get alerts if specified thresholds are crossed. Hitherto, Analytics queries have been applicable to performance and usage telemetry collected by Azure Application Insights from your live web app.
Now, you can either join imported data with your app telemetry, or instead run queries to analyze completely separate data. You could automate a daily analysis of route popularity and congestion. Analytics can run complex queries, including joins, aggregations, and statistical functions, to extract the necessary results.
You can view the results in the range of charts available in Analytics. Or you could have Power BI run the queries each day, plot the results on maps, and present them on a website. To analyze your data with Analytics, you need an account in Microsoft Azure. Sign in to the portal and set up a Storage resource in Azure. Create an Application Insights resource, and then navigate from there to the Analytics page. Before you analyze some data, you need to tell Analytics about its format. Adding a new data source opens a wizard where you name the data source and define its schema.
In the flight data example, the files are in CSV format. The sample data file includes headers, and the schema is automatically inferred from it. You get the opportunity to update the inferred data types and field names if necessary. Data files of hundreds of MB are easily handled by Analytics.
The script uploads the data to Azure storage, and then notifies Analytics to ingest it. The query language is powerful but easy to learn, and has a piped model in which each operator performs one task — much easier to work with than the nested SELECTs of SQL. Now we can perform a join on the tables. The main job of Analytics is as the powerful query tool of Application Insights, which monitors the health and usage of your web applications.
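A join between the imported flight data and a second table might be sketched like this (the table and column names are hypothetical):

```kusto
flights
| join kind=inner (airports) on $left.origin == $right.airportCode
```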
One of the reasons for importing data into Analytics is to augment the telemetry data. For example, to make the telemetry reports more readable, query URLs can be translated to page names. Analytics can be applied to your data today.
Read the detailed how-to here. Whether you want to enrich your data or analyze the logging data of your application, you can easily add a new data source and start ingesting the data. With high-volume ingestion, you can now apply the power of the Analytics query language to your own custom data.
As always, feel free to send us your questions or feedback.

Analyze your data with Application Insights Analytics

Updated on July 23. The functionality described in this blog was retired and no longer exists in Application Insights. Alternatively, you can send your custom log to the Azure Monitor log store, which is Log Analytics. You can query this data from Log Analytics or your Application Insights resource using cross-resource queries.

We have enhanced the schema of Analytics, the powerful query language of Visual Studio Application Insights.
These changes improve the discoverability of the data, simplifying your queries and exposing new metrics and dimensions of your telemetry. This schema is the single place where you should look for any performance counter that is reported from your application. Whether the performance counter is automatically collected by Application Insights SDK or you have configured the application to send other performance countersthe data can be found in this schema.
The performanceCounters schema exposes the category, counter name, and instance name of each performance counter. Counter instance names are only applicable to some performance counters, and typically indicate the name of the process to which the count relates.
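For example, to find out what performance counters are being reported, a sketch against this schema:

```kusto
performanceCounters
| summarize count() by category, counter
```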
For example, you can compare the performance of your app on different machines. At last, we expose client-side metrics.
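A sketch of such a cross-machine comparison (the counter name and the cloud_RoleInstance column for the machine name are assumptions; check your own schema):

```kusto
performanceCounters
| where counter == "% Processor Time"
| summarize avg(value) by bin(timestamp, 5m), cloud_RoleInstance
| render timechart
```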
Many of you have asked for this and we are happy to support your request. Set up your app for client-side telemetry in order to see these metrics.
The new schema includes the following metrics: networkDuration, sendDuration, receiveDuration, processingDuration, and totalDuration. These metrics indicate the lengths of different stages of the page loading process. We also provide the performanceBucket property to quickly analyze and group the client-side metrics by buckets. For example, you can find out which pages on your site are most popular and how long they take to load.
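A sketch of that popularity and load-time query (the name and totalDuration fields follow the browserTimings schema described above):

```kusto
browserTimings
| summarize pageViews = count(), avgLoad = avg(totalDuration) by name
| order by pageViews desc
```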
The data is there; all you have to do is query it. As always, feel free to send us your questions or feedback by using one of the following channels: suggest ideas and vote in Application Insights Ideas, join the conversation at the Application Insights Community, or try Application Analytics.

Application Insight Analytics: Schema updates

Azure Data Explorer and its documentation are an excellent place to educate yourself on the Kusto Query Language.
Perform ad-hoc queries on terabytes of data with Azure Data Explorer—a lightning-fast indexing and querying service to help you build near real-time and complex analytics solutions. Azure Data Explorer allows you to quickly identify trends, patterns, or anomalies in all data types, including structured, semi-structured, and unstructured data.
There's also a 4-hour Pluralsight course that will really jump-start you on KQL. Queries generally begin by referencing a table or a function. You start with that tabular data and then run it through a set of statements connected by pipes to shape your data into the final result. So if you start with TableA and you want to keep only events that have a certain key, you would use a where filter.
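A sketch of that filter (TableA is the placeholder from the text; the key column and value are illustrative):

```kusto
TableA
| where Key == "some-value"
```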
extend adds a new field, while project can either choose from the existing set of fields or add a new field. These two statements produce the same result.
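A sketch of the equivalence (the column names are placeholders):

```kusto
// extend the table with a computed field, then keep only what we need
TableA
| extend Duration = EndTime - StartTime
| project Name, Duration

// project can compute the new field directly
TableA
| project Name, Duration = EndTime - StartTime
```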
The summarize operator can perform aggregations on your dataset. For example, the count operator is shorthand for summarize count(). The bin function is often used in conjunction with summarize statements. It lets you group times or numbers into buckets. You technically don't have to specify a join kind, but I recommend that you always do. It makes for easier readability, and the default probably isn't what you expect.
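For example, bucketing rows into hourly groups with bin (TableA and the Timestamp column are placeholders):

```kusto
TableA
| summarize count() by bin(Timestamp, 1h)
```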
Note that joins are only on equality, and generally it's expected that the keys have the same name on both sides. If they aren't the same, you can use a project statement to make them the same, or use the alternate key-specification syntax.
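An explicit join kind with the alternate key-specification syntax might look like this (tables and key names are placeholders):

```kusto
TableA
| join kind=inner TableB on $left.KeyA == $right.KeyB
```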
There are some handy functions to get used to, like now(), which gives the current UTC time, and ago(). The ago function is especially handy when you're looking for recent data. Imagine that you have a bunch of entities and each one sends a row to your table periodically. You want to run a query over the latest message from each entity. Use these functions with care, though: if they are used on a huge table and the cardinality of the grouping is high, it can destroy performance.
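One common sketch for "latest row per entity" uses arg_max (the Messages table and the EntityId and Timestamp columns are placeholders; this is exactly the kind of high-cardinality grouping the caveat above warns about):

```kusto
Messages
| where Timestamp > ago(1h)
| summarize arg_max(Timestamp, *) by EntityId
```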
You can read the documentation to learn about the various chart types, but since I deal with a lot of time-series data, the one I use the most is timechart. It's a line chart where the x-axis is a datetime and everything else goes on the y-axis. It automatically keeps the x-axis spaced nicely even if your data doesn't have every time specified. So by using Azure Notebooks you can get up to speed quickly on the Kusto Query Language and create some replicable notebooks and resources.

Use Application Insights to monitor your live applications.
It will automatically detect performance anomalies, and includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your app. It's designed to help you continuously improve performance and usability.
It works for apps on a wide variety of platforms, including .NET and Node.js. It integrates with your DevOps process, and has connection points to a variety of development tools.
In addition, you can pull in telemetry from the host environments such as performance counters, Azure diagnostics, or Docker logs.
You can also set up web tests that periodically send synthetic requests to your web service. All these telemetry streams are integrated into Azure Monitor. In the Azure portal, you can apply powerful analytic and search tools to the raw data.
The impact on your app's performance is very small. Tracking calls are non-blocking, and are batched and sent in a separate thread. Application Insights is aimed at the development team, to help you understand how your app is performing and how it's being used.
It monitors request rates, response times, failure rates, and more. Install Application Insights in your app, set up availability web tests, and measure the effectiveness of each new feature that you deploy. Application Insights is one of the many services hosted within Microsoft Azure, and telemetry is sent there for analysis and presentation.
So before you do anything else, you'll need a subscription to Microsoft Azure. It's free to sign up, and if you choose the basic pricing plan of Application Insights, there's no charge until your application has grown to have substantial usage. If your organization already has a subscription, they could add your Microsoft account to it. There are several ways to get started.
Begin with whichever works best for you. You can add the others later.
This document explains NRQL syntax, clauses, components, and functions; it is a reference for the functions and clauses used in a NRQL query. A query requires a SELECT statement and a FROM clause; all other clauses are optional. The clause definitions below also contain example NRQL queries. The SELECT statement is followed by one or more arguments separated by commas.
Use the FROM clause to specify the data type you wish to query. You can merge values for the same attributes across multiple data types in a comma separated list.
You can, for example, return the count of all APM transactions over the last three days, or the count of all APM transactions and Browser events over the last three days. Use the WHERE clause to filter results: NRQL returns the results that fulfill the conditions you specify in the clause. The IN operator determines if the string value of an attribute is in a specified set.
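Sketches of those two counts (Transaction and PageView are assumed to be the APM and Browser event types):

```sql
-- APM transactions over the last three days
SELECT count(*) FROM Transaction SINCE 3 days ago

-- APM transactions and Browser events over the last three days
SELECT count(*) FROM Transaction, PageView SINCE 3 days ago
```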
The NOT IN operator determines if the string value of an attribute is not in a specified set. With the LIKE operator, if the substring does not begin or end the string you are matching against, the wildcard must begin or end the string. You can, for example, query the browser response time for pages with checkout in the URL for Safari users in the United States and Canada over the past 24 hours.
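That checkout query might be sketched as follows (attribute names such as pageUrl, userAgentName, and countryCode are assumptions based on common PageView attributes):

```sql
SELECT average(duration) FROM PageView
WHERE pageUrl LIKE '%checkout%'
  AND userAgentName = 'Safari'
  AND countryCode IN ('US', 'CA')
SINCE 1 day ago
```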
Use the AS clause to label an attribute, aggregator, step in a funnel, or the result of a math function with a string delimited by single quotes. The label is used in the resulting chart. With a funnel you can, for example, count people who have visited both the main page and the careers page of a site over the past week. Use FACET to break out your data by an attribute's values. For example, you could FACET your PageView data by deviceType to figure out what percentage of your traffic comes from mobile, tablet, and desktop devices.
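A sketch of that funnel (the session attribute and the page URLs are illustrative):

```sql
SELECT funnel(session,
  WHERE pageUrl LIKE '%/index.html' AS 'Main page',
  WHERE pageUrl LIKE '%/careers.html' AS 'Careers page')
FROM PageView SINCE 1 week ago
```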
FACET clauses support up to five attributes, separated by commas. If you are faceting on attributes with more than 1,000 unique values, a subset of facet values is selected and sorted according to the query type. When selecting min, max, or count, FACET uses those functions to determine how facets are picked and sorted. When selecting any other function, FACET uses the frequency of the attribute you are faceting on to determine how facets are picked and sorted.
For more on faceting on multiple attributes, with some real-world examples, see this New Relic blog post. For example, one query can show the cities with the highest pageview counts, using the total number of pageviews per city to determine how facets are picked and ordered. Another can show the cities that access the highest number of unique URLs, using the total number of times a particular city appears in the results to determine how facets are picked and ordered.
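Sketches of those two faceted queries (the city attribute is an assumption):

```sql
-- cities with the highest pageview counts
SELECT count(*) FROM PageView FACET city SINCE 1 day ago

-- cities accessing the most unique URLs
SELECT uniqueCount(pageUrl) FROM PageView FACET city SINCE 1 day ago
```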
This query uses the total number of times a particular city appears in the results to determine how facets are picked and ordered. Advanced segmentation and cohort analysis allow you to facet on bucket functions to more effectively break out your data. Cohort analysis is a way to group results together based on timestamps.
You can separate them into buckets that cover a specified range of dates and times. Separate multiple conditions with a comma. You can combine multiple attributes within your cases, and label the cases with the AS selector. Data points will be added to at most one facet case: the first facet case that they match.
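A FACET CASES sketch (the duration threshold and the labels are illustrative):

```sql
SELECT count(*) FROM Transaction
FACET CASES (WHERE duration < 1 AS 'fast', WHERE duration >= 1 AS 'slow')
SINCE 1 hour ago
```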