Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Thanks. I pasted the search and the result:

index="cdr"
| search "Call.TermParty.TrunkGroup.TrunkGroupId"="2811" OR "Call.TermParty.TrunkGroup.TrunkGroupId"="2810" "Call.ConnectTime"=* "Call.DisconnectTime"=*
| lookup Pais Call.RoutingInfo.DestAddr OUTPUT Countrie
| eval Disctime=strftime('Call.DisconnectTime'/1000,"%m/%d/%Y %H:%M:%S %Q")
| eval Conntime=strftime('Call.ConnectTime'/1000, "%m/%d/%Y %H:%M:%S%Q")
| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| eval Duracion=strftime(diffTime/1000, "%M:%S")
| table Countrie, Duracion

Countrie      Duracion
Chile         01:17
Hong Kong     00:02
Denmark       02:01
Denmark       00:51
Denmark       00:51
Denmark       06:30
China         02:59
Uruguay       00:18
rum.node.* metrics are page-level metrics. Page-level metrics are only captured if custom URL grouping rules are configured and active. Here are a couple of sanity checks:
- Check that the rule is active.
- Be sure to generate traffic after the rule is active.
- You need at least one matching domain rule and one matching path rule (create the domain rule first).
Hi, my dashboard has two inputs: a dropdown and a time picker. The requirement is that a panel should appear only after both inputs have been provided. I tried this (dashboard code below). When the dashboard first loads, I choose both inputs and the panel appears. After that, when I choose another item from the dropdown (keeping the same time), nothing happens; I have to pick a different time before the respective panel appears. What should I change in the code so that changing only the dropdown item shows the panel for the already chosen timeframe?

Dashboard code:

<form version="1.1" theme="light">
  <label>Time Picker Input</label>
  <description>Replicate time picker issue</description>
  <fieldset submitButton="false">
    <input type="dropdown" token="item" searchWhenChanged="true">
      <label>Select Item</label>
      <choice value="table1">TABLE-1</choice>
      <choice value="table2">TABLE-2</choice>
      <choice value="table3">TABLE-3</choice>
      <change>
        <condition value="table1">
          <set token="tab1">"Table1"</set>
          <unset token="tab2"></unset>
          <unset token="tab3"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
        <condition value="table2">
          <set token="tab2">"Table2"</set>
          <unset token="tab1"></unset>
          <unset token="tab3"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
        <condition value="table3">
          <set token="tab3">"Table3"</set>
          <unset token="tab1"></unset>
          <unset token="tab2"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
        <condition>
          <unset token="tab1"></unset>
          <unset token="tab2"></unset>
          <unset token="tab3"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
      </change>
    </input>
    <input type="time" token="time" searchWhenChanged="true">
      <label>Select Time</label>
      <change>
        <set token="is_time_selected">true</set>
      </change>
    </input>
  </fieldset>
  <row depends="$tab1$$is_time_selected$">
    <panel>
      <table>
        <title>Table1</title>
        <search>
          <query>| makeresults | eval Table = "Table1" | eval e_time = "$time.earliest$", l_time = "$time.latest$" | table Table e_time l_time</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row depends="$tab2$$is_time_selected$">
    <panel>
      <table>
        <title>Table2</title>
        <search>
          <query>| makeresults | eval Table = "Table2" | eval e_time = "$time.earliest$", l_time = "$time.latest$" | table Table e_time l_time</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row depends="$tab3$$is_time_selected$">
    <panel>
      <table>
        <title>Table3</title>
        <search>
          <query>| makeresults | eval Table = "Table3" | eval e_time = "$time.earliest$", l_time = "$time.latest$" | table Table e_time l_time</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Thanks & Regards, Shashwat
I might start with a signal like service.request.count with a filter on sf_error=true. Then if I choose “count by sf_service” for my function and visualize as a heat map, that might be a good start. Under “chart options” you can define color thresholds so low error counts can be green, high error counts can be red, etc. If you need to work with a value that you don’t have available, such as platform or region, you may want to look at defining those as span tags and indexing them as APM metricsets. https://docs.splunk.com/observability/en/apm/span-tags/cmms.html
1. Please don't post screenshots. Copy and paste your search and the results you're getting so far as text (preferably using the preformatted paragraph style or a code block). It's much easier for everyone involved.
2. You can't sum string values, so you have to calculate the sum before converting your duration field to a string (which you're doing in a rather unusual way; consider the convert() function rather than strftime). Better still, don't use eval to turn the duration field into a string at all — use fieldformat so the underlying value stays numeric.
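A minimal sketch of that approach, reusing the field names from the CDR search earlier in the thread (the trunk group filter, the Pais lookup and the Countrie field are assumed unchanged): keep the duration numeric, sum it, and only format it at display time.

index="cdr"
| search "Call.TermParty.TrunkGroup.TrunkGroupId"="2811" OR "Call.TermParty.TrunkGroup.TrunkGroupId"="2810" "Call.ConnectTime"=* "Call.DisconnectTime"=*
| lookup Pais Call.RoutingInfo.DestAddr OUTPUT Countrie
| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')/1000
| stats sum(diffTime) as TotalSeconds by Countrie
| fieldformat TotalSeconds=tostring(TotalSeconds, "duration")

The TotalSeconds name is just an illustration; fieldformat with tostring(X, "duration") renders the seconds as HH:MM:SS without changing the stored value, so further arithmetic on TotalSeconds still works.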
Hello. I've been trying for two days to activate a trial Splunk Cloud instance. I don't get the activation email. I've even tried creating another account. Any thoughts, or is there a known issue with the trial service automation?
Thanks for the reply.
1. The logs identify a 'createdDateTime' for each login entry. There is no direct relationship to compliance; I am searching for successful logins, then calculating what percentage of those are compliant. Here I am trying to get the very last attempt, regardless of compliance.
2. Ideally the table would look something like this:

userPrincipalName           Compliant  Total  Percent  lastLogin
Joe.Camel@domain.com        4          4      100      2024-12-06T18:14:50Z
Drake.Mallard@domain.com    1          4      25       2024-12-05T22:56:25Z
Jane.Doe@domain.com         0          8      0        2024-11-25T07:13:22Z

Where the 'lastLogin' column is the very last 'createdDateTime' login entry per user.
How can I get the total sum of the Duration fields? Regards.  
Yes I took out the leading *.  Thank you much 
Yes this worked, thank you very much
Hello, I want to make a drilldown on these services, and I have to apply a drilldown for s3-bucket / vpc / ec2. I've tried several things but nothing works.

<row>
  <panel>
    <title>AWS Services Monitoring</title>
    <table>
      <search>
        <!--done> <set token="Services">$click.name$</set> </done-->
        <query>index="aws_vpc_corp-it_security-prd" sourcetype="aws:s3:csv" ShortConfigRuleName="*"
          | eval Services = case( match(ShortConfigRuleName, "s3-bucket"), "s3-bucket", match(ShortConfigRuleName, "iam-password"), "iam-password", match(ShortConfigRuleName, "iam-policy"), "iam-policy", match(ShortConfigRuleName, "iam-user"), "iam-user", match(ShortConfigRuleName, "guardduty"), "guardduty", match(ShortConfigRuleName, "ec2"), "ec2", match(ShortConfigRuleName, "vpc"), "vpc", match(ShortConfigRuleName, "ebs-snapshot"), "ebs-snapshot", match(ShortConfigRuleName, "rds-snapshots"), "rds-snapshots", match(ShortConfigRuleName, "cloudtrail"), "cloudtrail", match(ShortConfigRuleName, "subnet"), "subnet", match(ShortConfigRuleName, "lambda-function"), "lambda-function", 1=1, "Other")
          | search Services!=Other
          | lookup aws_security_all_account_ids account_id AS AccountId OUTPUT name
          | table name AccountId Services ShortConfigRuleName ComplianceType OrderingTimestamp ResultRecordedTime
          | dedup AccountId Services ShortConfigRuleName ComplianceType
          | rename name as "AWS Account Name", "ComplianceType" as "Status", "OrderingTimestamp" as "Last Check", "ResultRecordedTime" as "Next Check"
          | fillnull value="N/A"
          | search $ResourceName$ $Services$ $Status$</query>
        <earliest>$earliest$</earliest>
        <latest>$latest$</latest>
      </search>
      <option name="count">100</option>
      <option name="drilldown">row</option>
      <option name="refresh.display">progressbar</option>
      <option name="wrap">true</option>
      <format type="color" field="Status">
        <colorPalette type="map">{"NON_COMPLIANT":#D94E17}</colorPalette>
      </format>
      <drilldown>
        <condition match="$Services$ != &quot;s3-bucket&quot;">
          <set token="Services">s3-bucket</set>
          <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;S3_details=true&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
        </condition>
        <condition match="$Services$ != &quot;vpc&quot;">
          <set token="Services">vpc</set>
          <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;VPC_details=true&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
        </condition>
        <condition match="$Services$ != &quot;ec2&quot;">
          <set token="Services">ec2</set>
          <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;EC2_details=true&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
        </condition>
      </drilldown>
    </table>
  </panel>
</row>

The drilldown is supposed to point to a second dashboard in the following way:

</panel>
<panel depends="$VPC_details$">
  <title>VPC DETAILS : ShortConfigRuleName=$ShortConfigRuleName$ Service=$Services$</title>
  <table>
    <search>
      <query>index="aws_vpc_corp-it_security-prd"
        | search ShortConfigRuleName=$ShortConfigRuleName$
        | search AccountId=$AccountId$
        | search ComplianceType=$Status$
        | eval Services = case( match(ShortConfigRuleName, "s3-bucket"), "s3-bucket", match(ShortConfigRuleName, "iam-password"), "iam-password", match(ShortConfigRuleName, "iam-policy"), "iam-policy", match(ShortConfigRuleName, "iam-user"), "iam-user", match(ShortConfigRuleName, "guardduty"), "guardduty", match(ShortConfigRuleName, "ec2"), "ec2", match(ShortConfigRuleName, "vpc"), "vpc", match(ShortConfigRuleName, "ebs-snapshot"), "ebs-snapshot", match(ShortConfigRuleName, "rds-snapshots"), "rds-snapshots", match(ShortConfigRuleName, "cloudtrail"), "cloudtrail", match(ShortConfigRuleName, "subnet"), "subnet", match(ShortConfigRuleName, "lambda-function"), "lambda-function", 1=1, "Other")
        | where ResourceName!="N/A"
        | table AccountId ResourceName Services ComplianceType
        | rename ResourceName as "InstanceName"
        | table AccountId Services ComplianceType
        | dedup AccountId Services ComplianceType
        | appendcols [ search index="aws_vpc_corp-it_security-prd" source="s3://vwt-s3-secuprod-*" | search AccountId=$AccountId$ | table InstanceId InstanceName Platform State | dedup InstanceId InstanceName Platform State]
        | table AccountId Services ComplianceType InstanceId InstanceName Platform State</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="count">100</option>
    <option name="drilldown">cell</option>
    <option name="refresh.display">progressbar</option>
    <format type="color" field="ComplianceType">
      <colorPalette type="map">{"NON_COMPLIANT":#D94E17}</colorPalette>
    </format>
    <format type="color" field="State">
      <colorPalette type="map">{"stopped":#D94E17,"running":#55C169}</colorPalette>
    </format>
    <drilldown>
      <condition>
        <!-- Check that the filter exactly matches the selected service -->
        <eval token="S3_details">if(match($click.value$, "s3-bucket"), "true", "false")</eval>
        <eval token="VPC_details">if(match($click.value$, "vpc"), "true", "false")</eval>
        <eval token="EC2_details">if(match($click.value$, "ec2"), "true", "false")</eval>
      </condition>
    </drilldown>
  </table>
</panel>
<panel depends="$EC2_details$">
  <title>EC2 DETAILS : ShortConfigRuleName=$ShortConfigRuleName$ Service=$Services$</title>
  <table>
    <search>
      <query>index="aws_vpc_corp-it_security-prd"
        | search ShortConfigRuleName=$ShortConfigRuleName$
        | search AccountId=$AccountId$
        | search ComplianceType=$Status$
        | eval Services = case( match(ShortConfigRuleName, "s3-bucket"), "s3-bucket", match(ShortConfigRuleName, "iam-password"), "iam-password", match(ShortConfigRuleName, "iam-policy"), "iam-policy", match(ShortConfigRuleName, "iam-user"), "iam-user", match(ShortConfigRuleName, "guardduty"), "guardduty", match(ShortConfigRuleName, "ec2"), "ec2", match(ShortConfigRuleName, "vpc"), "vpc", match(ShortConfigRuleName, "ebs-snapshot"), "ebs-snapshot", match(ShortConfigRuleName, "rds-snapshots"), "rds-snapshots", match(ShortConfigRuleName, "cloudtrail"), "cloudtrail", match(ShortConfigRuleName, "subnet"), "subnet", match(ShortConfigRuleName, "lambda-function"), "lambda-function", 1=1, "Other")
        | where ResourceName!="N/A"
        | table AccountId ResourceName Services ComplianceType
        | rename ResourceName as "InstanceName"
        | table AccountId Services ComplianceType
        | dedup AccountId Services ComplianceType
        | appendcols [ search index="aws_vpc_corp-it_security-prd" source="s3://vwt-s3-secuprod-*" | search AccountId=$AccountId$ | table InstanceId InstanceName Platform State | dedup InstanceId InstanceName Platform State]
        | table AccountId Services ComplianceType InstanceId InstanceName Platform State</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="count">100</option>
    <option name="drilldown">cell</option>
    <option name="refresh.display">progressbar</option>
    <format type="color" field="ComplianceType">
      <colorPalette type="map">{"NON_COMPLIANT":#D94E17}</colorPalette>
    </format>
    <format type="color" field="State">
      <colorPalette type="map">{"stopped":#D94E17,"running":#55C169}</colorPalette>
    </format>
    <drilldown>
      <condition>
        <!-- Check that the filter exactly matches the selected service -->
        <eval token="S3_details">if(match($click.value$, "s3-bucket"), "true", "false")</eval>
        <eval token="VPC_details">if(match($click.value$, "vpc"), "true", "false")</eval>
        <eval token="EC2_details">if(match($click.value$, "ec2"), "true", "false")</eval>
      </condition>
    </drilldown>
  </table>
</panel>
<panel depends="$SERVICES_details$">
  <title>SERVICES DETAILS : ShortConfigRuleName=$ShortConfigRuleName$ Service=$Services$</title>
  <table>
    <search>
      <query>index="aws_vpc_corp-it_security-prd"
        | search ShortConfigRuleName=$ShortConfigRuleName$
        | search AccountId=$AccountId$
        | search ComplianceType=$Status$
        | eval Services = case( match(ShortConfigRuleName, "s3-bucket"), "s3-bucket", match(ShortConfigRuleName, "iam-password"), "iam-password", match(ShortConfigRuleName, "iam-policy"), "iam-policy", match(ShortConfigRuleName, "iam-user"), "iam-user", match(ShortConfigRuleName, "guardduty"), "guardduty", match(ShortConfigRuleName, "ec2"), "ec2", match(ShortConfigRuleName, "vpc"), "vpc", match(ShortConfigRuleName, "ebs-snapshot"), "ebs-snapshot", match(ShortConfigRuleName, "rds-snapshots"), "rds-snapshots", match(ShortConfigRuleName, "cloudtrail"), "cloudtrail", match(ShortConfigRuleName, "subnet"), "subnet", match(ShortConfigRuleName, "lambda-function"), "lambda-function", 1=1, "Other")
        | where ResourceName!="N/A"
        | table AccountId ResourceName Services ComplianceType
        | rename ResourceName as "InstanceName"
        | table AccountId Services ComplianceType
        | dedup AccountId Services ComplianceType
        | appendcols [ search index="aws_vpc_corp-it_security-prd" source="s3://vwt-s3-secuprod-*" | search AccountId=$AccountId$ | table InstanceId InstanceName Platform State | dedup InstanceId InstanceName Platform State]
        | table AccountId Services ComplianceType InstanceId InstanceName Platform State</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="count">100</option>
    <option name="drilldown">cell</option>
    <option name="refresh.display">progressbar</option>
    <format type="color" field="ComplianceType">
      <colorPalette type="map">{"NON_COMPLIANT":#D94E17}</colorPalette>
    </format>
    <format type="color" field="State">
      <colorPalette type="map">{"stopped":#D94E17,"running":#55C169}</colorPalette>
    </format>
    <drilldown>
      <condition>
        <!-- Check that the filter exactly matches the selected service -->
        <eval token="S3_details">if(match($click.value$, "s3-bucket"), "true", "false")</eval>
        <eval token="VPC_details">if(match($click.value$, "vpc"), "true", "false")</eval>
        <eval token="EC2_details">if(match($click.value$, "ec2"), "true", "false")</eval>
      </condition>
    </drilldown>
  </table>
</panel>
</row>

When s3-bucket is selected, we should land on the 'S3_details' panel, and so on. The link target works fine; it is the $click.value$ handling at the top, which selects the service, that does not work.
Two critical questions you haven't answered:
1. What is defined as "last login time" in relation to DeviceCompliance? Is it the very last login regardless of compliance, or the last login in each compliance state?
2. How do you want to display this value in your presentation? Use a mockup table to illustrate.
This question has very little to do with Splunk or your familiarity with Splunk. As @PickleRick suggested, the first and most important question is: what data source do you have in Splunk? The real question you need to ask is: how do I, without Splunk, determine from a source that has been ingested into Splunk "email delivery status with reason, or detailed information if delivered/not delivered, as well as multiple specific subject sources from Postfix"? This is a data analytics forum, not a Postfix or a Linux forum.

But there is more to a search. You also need to ask yourself: which email am I searching for? Assuming your Splunk instance contains syslog entries from Postfix, you can determine delivery status if the recipient is known (better still, if both sender and recipient are known) as well as the approximate time of that email. No, a typical Postfix configuration does not log the subject, so searching by subject is futile. Also note: modern syslog can use different log formats. Which one does your system use?

If you can post answers to these questions, volunteers here can help you construct a meaningful search. Or you may find the search yourself while trying to answer them.
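Purely as an illustration of where answering those questions leads — not a ready-made search — here is a rough sketch, assuming Postfix syslog is already ingested (the index and sourcetype names below are placeholders, as is the recipient address) and the field extractions are done inline because they may not exist out of the box:

index=mail sourcetype=syslog "postfix/smtp" "recipient@example.com"
| rex "to=<(?<recipient>[^>]+)>"
| rex "status=(?<delivery_status>\w+)\s+\((?<detail>[^)]+)\)"
| table _time recipient delivery_status detail

This relies on the typical Postfix delivery line containing "to=<...>", "status=sent/bounced/deferred" and a parenthesised reason; adjust the rex patterns to whatever your actual log format shows.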
To ask an answerable data analytics question, follow these golden rules; nay, call them the four commandments:
1. Illustrate data input (in raw text, anonymize as needed), whether they are raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between illustrated data and desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.

To apply this here: what is "my table data"? Suppose by "text box" you mean a text input element in a dashboard that gives you a token $text_tok$, and suppose your user typed "sometext, some more, some more text". What exact result from your "table data" do you expect? Is the illustrated search supposed to demonstrate your attempt to use such a token? Where is the token if so? If not, what is it supposed to tell volunteers?
Thank you all, it worked! (Grazie mille!) YEAHHH
I want to draw your attention to the method of collecting CPU utilization data in Splunk_TA_nix (cpu_metric.sh).

I have been dealing with many false positive alerts regarding CPU usage in our organization. We have ITSI implemented and use Splunk_TA_nix to collect data. An alert is generated when two consecutive CPU usage values exceed 90%. We collect values every 5 minutes.

The script that collects this data (Splunk_TA_nix/bin/cpu_metric.sh) uses the command sar -P ALL 1 1, which reports the CPU load over a single 1-second interval. Used for CPU monitoring in our setup (every 5 minutes), we therefore only have information about 1 second out of every five minutes, and we evaluate CPU usage based on that. Normally CPU usage fluctuates depending on how commands are started, how long they run, and how demanding they are. With this measurement method, it happens quite often that two values in a row cross the threshold, and an alert is then generated. For monitoring, however, it is the average CPU utilization that matters, not random peaks. If average values were collected, such false positive alerts would not occur (as long as the CPU is not genuinely overloaded).

The standard way good administrators check CPU usage is, for example, sar 120 1, which gives the average CPU usage over 2 minutes. Data collection with sar via cron was once recommended to be set up like this:

*/10 * * * * root /usr/lib64/sa/sa1 -S XALL 600 1

This collects the average CPU usage over a 10-minute period, writes the value to a sar file, and repeats every 10 minutes. Such a setting gives a realistic picture of how loaded the CPU is.

Splunk does not provide a reasonable way to set these values in the cpu_metric.sh script. The only workaround is to copy the script and modify it yourself, but then the connection to Splunk_TA_nix is lost — what happens when Splunk_TA_nix is upgraded? My preference is to enable CPU data collection with the following stanza in our own app (deployed via the deployment server), which is linked to Splunk_TA_nix:

[script://$SPLUNK_HOME/etc/apps/Splunk_TA_nix/bin/cpu_metric.sh]
disabled = false
index = unix_perfmon_metrics

But this method does not give us the possibility to pass options to sar. It would be ideal if something like this could be done:

[script://./bin/my_cpu_metric.sh]
disabled = false
index = unix_perfmon_metrics

./bin/my_cpu_metric.sh
exec $SPLUNK_HOME/etc/apps/Splunk_TA_nix/bin/cpu_metric.sh 120 1

But this doesn't work. All it would take is for cpu_metric.sh to accept input settings and adjust its use of the sar command accordingly. The same applies to the other scripts in this TA.

If you have similar experiences, feel free to share them. If my concerns are justified, it would be right for this TA to be updated to give administrators the opportunity to set better metrics collection parameters. What do you think?
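On the alerting side (it does not fix the 1-second sampling itself), one partial mitigation could be to average the ingested samples over a longer window before comparing against the threshold. A sketch, assuming the data lands in a metrics index and that a per-host idle metric is available — the metric name cpu_metric.pctIdle below is a placeholder, so check what your installation actually emits:

| mstats avg(cpu_metric.pctIdle) as avg_idle WHERE index=unix_perfmon_metrics span=10m BY host
| eval avg_cpu_used=100-avg_idle
| where avg_cpu_used>90

Alerting on a 10-minute average rather than on two raw samples at least smooths out the random peaks described above, although the underlying measurement is still only one second per collection interval.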
Hi @CyberWolf,
please try:

<your_search>
| rex "^\"address\":\"(?<address>[^\"]+)\",\"city\":\"(?<city>[^\"]+)\",\"state\":\"(?<state>[^\"]+)"
| eval address=address." City ".city." state ".state

Ciao.
Giuseppe
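If the events are in fact complete JSON objects (the rex above assumes the raw text starts right at "address"), spath might be a simpler alternative — a sketch under that assumption:

<your_search>
| spath
| eval address=address." City ".city." state ".state

If the address fields are nested deeper in the JSON you would need spath path=... to the actual location instead of the bare spath auto-extraction.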
Hello PaulPanther, we can't actually run a curl command since we aren't sure what the webhook is. After reading the documentation and contacting support, it seems the app should create the webhook, but I'm not sure what it is. If I need to create the webhook myself, I'm not sure how to create one for the app either.
Working on supplementing a search we are using to implement conditional access policies. The search identifies successful logins and produces a percentage of compliant logins over a period. What I am trying to add is the last login time, which is identified by "createdDateTime" in the logs.

Here is the current search:

index="audit" sourcetype="signin" userPrincipalName="*domain.com" status.errorCode=0
| eval DeviceCompliance='deviceDetail.isCompliant'
| chart count by userPrincipalName DeviceCompliance
| eval total=true + false
| rename true as compliant
| eval percent=((compliant/total)*100)
| table userPrincipalName compliant total percent

I have tried adding / modifying pipes like "stats latest(createdDateTime) by userPrincilaName compliant total percent", but this inserts the time into the true / false fields. I feel that I am modifying the data too much up front and maybe need to change the piping order. All suggestions welcome.
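One possible way to get there (untested, and assuming deviceDetail.isCompliant comes through as the strings "true"/"false", as the chart columns suggest) is to replace chart with stats so createdDateTime is still available when the per-user row is built:

index="audit" sourcetype="signin" userPrincipalName="*domain.com" status.errorCode=0
| eval DeviceCompliance='deviceDetail.isCompliant'
| stats count as total sum(eval(if(DeviceCompliance="true",1,0))) as compliant max(createdDateTime) as lastLogin by userPrincipalName
| eval percent=round((compliant/total)*100,1)
| table userPrincipalName compliant total percent lastLogin

max() works here because ISO 8601 timestamps sort correctly as strings; latest(createdDateTime) would also work if _time is parsed from the same field.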
Final version... obviously, inside a script or an interactive menu with parameters, this should work fine:

curl -skL -u 'usr:pwd' 'https://SHC_NODE:8089/servicesNS/-/-/saved/searches' --get -d 'output_mode=json' -d 'count=0' | jq -r '
  .entry[]
  | select(.acl.app == "MYAPP" and .acl.owner == "MYUSER")
  | .name + " : " + .acl.app + " : " + .author + " : " + .acl.owner + " : " + .acl.sharing + " : " + (.content.disabled|tostring)
'

Alternative:

curl -skL -u 'usr:pwd' 'https://SHC_NODE:8089/servicesNS/-/-/saved/searches' --get -d 'output_mode=json' -d 'count=0' | jq -r '
  .entry[]
  | select(.acl.app == "MYAPP" and .acl.owner == "MYUSER")
  | [.name,.acl.app,.author,.acl.owner,.acl.sharing,.content.disabled]
  | @csv
'

Thanks all
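If it helps, roughly the same listing can be produced from inside Splunk with the rest search command (assuming your role is allowed to read the saved/searches endpoint; MYAPP and MYUSER are placeholders as above):

| rest /servicesNS/-/-/saved/searches count=0
| search eai:acl.app="MYAPP" eai:acl.owner="MYUSER"
| table title eai:acl.app author eai:acl.owner eai:acl.sharing disabled

On a search head cluster you may want to add splunk_server=local (or target a specific member), depending on where you want the list from.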