All Topics

I have the below query: index=demo-app TERM(Application) TERM(Received) NOT TERM(processed) | stats count by ApplicationId | fields ApplicationId | eval matchfield=ApplicationId | join matchfield [search index=demo-app TERM(App) TERM(transaction) | stats count by MessageCode | fields MessageCode | eval matchfield=MessageCode] | stats count(matchfield) When I run this search, the statistics values are capped at 50,000. How can I tweak my query to see complete results without that restriction?
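A join-free sketch that avoids the 50,000-row subsearch limit, assuming both event sets live in the same demo-app index and use the field names from the query above:

index=demo-app ((TERM(Application) TERM(Received) NOT TERM(processed)) OR (TERM(App) TERM(transaction)))
| eval matchfield=coalesce(ApplicationId, MessageCode)
| eval src=if(isnotnull(ApplicationId), "received", "transaction")
| stats dc(src) as sources by matchfield
| where sources==2
| stats count

Because everything runs in a single search, no subsearch truncation applies; the final count is the number of matchfield values seen in both event sets.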
I have logs with a Customer field where the name of the customer is not consistent.     customer=Bobs Pizza   customer=Bob's Pizza   customer=Bobs Pizzeria I want to use an automatic lookup to change all of them to a standard name without needing to change existing searches.   customer_lookup.csv   customer_name,standard_customer_name   Bobs Pizza,Bob's Pizza   Bobs Pizzeria,Bob's Pizza I am trying to do this with a lookup table in the search before I try to make it an automatic lookup.  | lookup customer_lookup customer_name as Customer output standard_customer_name AS Customer This lookup only works if the Customer returned in the search is actually in the lookup table. So Customer="Bobs Pizza" is in the result, but Customer="Frank's Artichokes" is not. I can't add all customers to the table. I have tried many forms of the lookup. I can get a list with the original Customer name and the standard customer name when one exists, but that won't work for current searches. Can this be done? I would think it could cause problems, since someone could add an automatic lookup to hide certain things if needed.
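One way to keep non-matching customers intact is to output the standardized name into a separate field and fall back to the original with coalesce; a minimal sketch, assuming the lookup file and field names from the question:

| lookup customer_lookup customer_name AS Customer OUTPUT standard_customer_name
| eval Customer=coalesce(standard_customer_name, Customer)
| fields - standard_customer_name

Customers that are not in the table (like Frank's Artichokes) keep their original value, so existing searches on Customer continue to work.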
We are trying to use the appdynamics node dependency and are currently unable to resolve it. It appears that it's unavailable at the expected AppDynamics CDN location. Last week, version 23.5 was successfully found and downloaded, but today neither 23.5 nor 23.7 appears to be available:

npm install appdynamics
npm ERR! code E404
npm ERR! 404 Not Found - GET https://cdn.appdynamics.com/packages/nodejs/23.7.0.0/appdynamics-native-node.tgz
npm ERR! 404
npm ERR! 404 'appdynamics-native@https://cdn.appdynamics.com/packages/nodejs/23.7.0.0/appdynamics-native-node.tgz' is not in this registry.
npm ERR! 404
npm ERR! 404 Note that you can also install from a
npm ERR! 404 tarball, folder, http url, or git url.

Has anyone else been able to resolve this issue, or is there a known issue resolving this dependency?
Hi! I am faced with the following task and do not understand which way to go. I want to create an alert that will be triggered when a certain application is deleted. For example:

index=main source=<custom_source> Message="<program_name> is uninstalled"

Everything works as it should; I use the internal event type, and that's not the question. Some software generates 2 events when UPDATING OR UPGRADING: the first says the program is uninstalled and the second says the program is installed. In this case my alert gives a false alarm. I have come up with the following alert logic to correct the false positives:

Search events for the last 30 minutes: index=main source=<custom_source> Message="<program_name> is uninstalled". Next we need to check whether there were installation events: index=main source=<custom_source> Message="<program_name> is installed" on the machines from the first request. If there was no installation event on a machine after the uninstall event in the last 30 minutes, then issue an alert.

I can't create a query from this logic. If you could help me with advice, I would be very grateful. P.S. We looked in the direction of events from the Application log and MSI Installer logs, but in our case this is not applicable and we must use a custom source. Thanks for your help, have a nice day.
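A sketch of one way to express that logic in a single correlation search, assuming the events carry a host field identifying the machine; the source and message strings are placeholders carried over from the question:

index=main source=<custom_source> (Message="<program_name> is uninstalled" OR Message="<program_name> is installed") earliest=-30m
| eval action=if(match(Message, "uninstalled"), "uninstall", "install")
| stats max(eval(if(action=="uninstall", _time, null()))) as last_uninstall max(eval(if(action=="install", _time, null()))) as last_install by host
| where isnotnull(last_uninstall) AND (isnull(last_install) OR last_install < last_uninstall)

Hosts that saw an uninstall with no later install inside the 30-minute window survive the where clause, so the alert can trigger on any results.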
I have a CSV file that populates a standard dropdown . The selection made, in this standard dropdown, then populates the second dropdown which is a Multi-Select. I then use the token, from the Multi-Select, to perform my search. This works great when there is only one item selected in the Multi-Select dropdown, however, when multiple items are selected it returns incorrect results. Any help would be most appreciated! This is my source code:   <form version="1.1"> <label>JRD AUR Divert Zone Multi Dropdown (CSV) Counts by Downtime</label> <description>Testing for CSV Dropdown Functionality</description> <fieldset submitButton="false" autoRun="false"> <input type="time" token="field1" searchWhenChanged="true"> <label>Time Selection</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> <input type="dropdown" token="Locationid_tok" searchWhenChanged="false"> <label>Select Bins Location</label> <fieldForLabel>Locationid</fieldForLabel> <fieldForValue>Locationid</fieldForValue> <search> <query>| inputlookup AUR_Bin_Divert_Zones.csv | dedup Locationid | table "Locationid"</query> <earliest>0</earliest> <latest></latest> </search> <choice value="*">All</choice> </input> <input type="multiselect" token="Zoneid_tok" searchWhenChanged="true"> <label>Select Divert Zone(s)</label> <fieldForLabel>Zoneid</fieldForLabel> <fieldForValue>Zoneid</fieldForValue> <search> <query>| inputlookup AUR_Bin_Divert_Zones.csv | search Locationid="$Locationid_tok$" | table "Zoneid"</query> <earliest>$field1.earliest$</earliest> <latest>$field1.latest$</latest> </search> <choice value="*">All</choice> <delimiter> OR </delimiter> </input> </fieldset> <row> <panel> <table> <search> <query>index=5_ip_cnv sourcetype=ftae_hmi_alarms $Zoneid_tok$ |eval Time=_time |transaction Alarm startswith=*$Zoneid_tok$",1,0,192" endswith=*$Zoneid_tok$",0,0,192" maxevents=2 |eval Downtime = strftime(duration, "%M:%S") |makemv delim=";" Message |eval EventType=mvindex(Message,0) |rename Description as EventLocation |eval Date=lower(strftime(_time,"%+")) |eval date_wday=lower(strftime(_time,"%A")) |eval date_hour_EST=tonumber(strftime(_time, "%H")) |where NOT (date_wday="saturday" OR date_wday="sunday") |where (date_hour_EST&gt;=9 AND date_hour_EST&lt;19) |rename Downtime as "Downtime in Minutes:Seconds" |sort +Time |table Date EventType EventLocation "Downtime in Minutes:Seconds" </query> <earliest>$field1.earliest$</earliest> <latest>$field1.latest$</latest> </search> <option name="count">60</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <format type="number" field="Time"> <option name="precision">3</option> <option name="useThousandSeparators">false</option> </format> </table> </panel> </row> </form>
Need help writing a generic query to capture PII data (social security numbers, credit card numbers, email addresses) from application logs.
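A rough starting point, as a minimal sketch; the index name is a placeholder, and the regexes are simplified illustrations that will need tuning and will produce false positives:

index=<your_application_index>
| rex field=_raw "(?<ssn>\b\d{3}-\d{2}-\d{4}\b)"
| rex field=_raw "(?<credit_card>\b(?:\d[ -]?){13,16}\b)"
| rex field=_raw "(?<email>[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,})"
| where isnotnull(ssn) OR isnotnull(credit_card) OR isnotnull(email)
| table _time sourcetype source ssn credit_card email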
Hi team, I am not able to import the health rules using the API access token method or by my username and password. Whenever I try to import it using either of the methods, the response comes as "null". Could you please guide me as to where I am going wrong? The command I am using while pushing using the access token(I have all the default permissions set for my client id)(not working):  curl -X POST "https://customername.saas.appdynamics.com/controller/healthrules/application_name" -F file=@test.xml -H "Authorization: Bearer <access_token>" Note: I am fetching the access token using the below command, which is working fine curl -X POST -H "Content-Type: application/x-www-form-urlencoded" "https://customername.saas.appdynamics.com/controller/api/oauth/access_token" -d 'grant_type=client_credentials&client_id=<client_id>@<account_name>&client_secret=<client_secret>' Command I am using while pushing using username, password(not working):  curl -X POST --user username@accountname:password https://customername.saas.appdynamics.com/controller/healthrules/Application_name -F file=@test.xml The health rule xml file below contains seven health rules related to Average Response time, Calls, CPU, Error, Garbage collection, Heap, and Memory. The XML file of health rules: <health-rules> <health-rule> <name>APM_Average_Response_Time_x_tier_name</name> <type>NODE_HEALTH_TRANSACTION_PERFORMANCE</type> <description/> <enabled>true</enabled> <is-default>false</is-default> <always-enabled>true</always-enabled> <duration-min>30</duration-min> <wait-time-min>30</wait-time-min> <affected-entities-match-criteria> <affected-infra-match-criteria> <type>SPECIFIC_TIERS</type> <application-components> <application-component>x_tier_name</application-component> </application-components> </affected-infra-match-criteria> </affected-entities-match-criteria> <critical-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>Average Response time (ms)</display-name> <short-name>A</short-name> <condition-value-type>BASELINE_STANDARD_DEVIATION</condition-value-type> <condition-value>9.0</condition-value> <operator>EQUALS</operator> <condition-expression/> <use-active-baseline>true</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>Average Response Time (ms)</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </critical-execution-criteria> <warning-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>Average Response time (ms)</display-name> <short-name>A</short-name> <condition-value-type>BASELINE_STANDARD_DEVIATION</condition-value-type> <condition-value>7.0</condition-value> <operator>EQUALS</operator> <condition-expression/> <use-active-baseline>true</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> 
<function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>Average Response Time (ms)</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </warning-execution-criteria> </health-rule> <health-rule> <name>APM_Calls_x_tier_name</name> <type>NODE_HEALTH_TRANSACTION_PERFORMANCE</type> <description/> <enabled>true</enabled> <is-default>false</is-default> <always-enabled>true</always-enabled> <duration-min>30</duration-min> <wait-time-min>30</wait-time-min> <affected-entities-match-criteria> <affected-infra-match-criteria> <type>SPECIFIC_TIERS</type> <application-components> <application-component>x_tier_name</application-component> </application-components> </affected-infra-match-criteria> </affected-entities-match-criteria> <critical-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>Calls Per Minute</display-name> <short-name>A</short-name> <condition-value-type>BASELINE_STANDARD_DEVIATION</condition-value-type> <condition-value>5.0</condition-value> <operator>EQUALS</operator> <condition-expression/> <use-active-baseline>true</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>Calls per Minute</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </critical-execution-criteria> <warning-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>Calls Per Minute</display-name> <short-name>A</short-name> <condition-value-type>BASELINE_STANDARD_DEVIATION</condition-value-type> <condition-value>3.0</condition-value> <operator>EQUALS</operator> <condition-expression/> <use-active-baseline>true</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>Calls per Minute</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </warning-execution-criteria> </health-rule> <health-rule> <name>APM_CPU_x_tier_name</name> <type>INFRASTRUCTURE</type> <description/> <enabled>true</enabled> <is-default>false</is-default> <always-enabled>true</always-enabled> <duration-min>30</duration-min> <wait-time-min>30</wait-time-min> <affected-entities-match-criteria> <affected-infra-match-criteria> <type>SPECIFIC_TIERS</type> <application-components> <application-component>x_tier_name</application-component> </application-components> </affected-infra-match-criteria> </affected-entities-match-criteria> <critical-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> 
<entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>CPU | % Busy</display-name> <short-name>A</short-name> <condition-value-type>ABSOLUTE</condition-value-type> <condition-value>90.0</condition-value> <operator>GREATER_THAN</operator> <condition-expression/> <use-active-baseline>false</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>Hardware Resources|CPU|%Busy</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </critical-execution-criteria> <warning-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>CPU | % Busy</display-name> <short-name>A</short-name> <condition-value-type>ABSOLUTE</condition-value-type> <condition-value>75.0</condition-value> <operator>GREATER_THAN</operator> <condition-expression/> <use-active-baseline>false</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>Hardware Resources|CPU|%Busy</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </warning-execution-criteria> </health-rule> <health-rule> <name>APM_Errors_x_tier_name</name> <type>NODE_HEALTH_TRANSACTION_PERFORMANCE</type> <description/> <enabled>true</enabled> <is-default>false</is-default> <always-enabled>true</always-enabled> <duration-min>30</duration-min> <wait-time-min>30</wait-time-min> <affected-entities-match-criteria> <affected-infra-match-criteria> <type>SPECIFIC_TIERS</type> <application-components> <application-component>x_tier_name</application-component> </application-components> </affected-infra-match-criteria> </affected-entities-match-criteria> <critical-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>Errors per minute</display-name> <short-name>A</short-name> <condition-value-type>BASELINE_STANDARD_DEVIATION</condition-value-type> <condition-value>5.0</condition-value> <operator>GREATER_THAN</operator> <condition-expression/> <use-active-baseline>true</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>Errors per Minute</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </critical-execution-criteria> <warning-execution-criteria> 
<condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>Errors per minute</display-name> <short-name>A</short-name> <condition-value-type>BASELINE_STANDARD_DEVIATION</condition-value-type> <condition-value>3.0</condition-value> <operator>GREATER_THAN</operator> <condition-expression/> <use-active-baseline>true</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>Errors per Minute</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </warning-execution-criteria> </health-rule> <health-rule> <name>APM_Memory_x_tier_name</name> <type>INFRASTRUCTURE</type> <description/> <enabled>true</enabled> <is-default>false</is-default> <always-enabled>true</always-enabled> <duration-min>30</duration-min> <wait-time-min>30</wait-time-min> <affected-entities-match-criteria> <affected-infra-match-criteria> <type>SPECIFIC_TIERS</type> <application-components> <application-component>x_tier_name</application-component> </application-components> </affected-infra-match-criteria> </affected-entities-match-criteria> <critical-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>Memory | Used (%)</display-name> <short-name>A</short-name> <condition-value-type>ABSOLUTE</condition-value-type> <condition-value>90.0</condition-value> <operator>GREATER_THAN</operator> <condition-expression/> <use-active-baseline>false</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>Hardware Resources|Memory|Used %</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </critical-execution-criteria> <warning-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>Memory | Used (%)</display-name> <short-name>A</short-name> <condition-value-type>ABSOLUTE</condition-value-type> <condition-value>75.0</condition-value> <operator>GREATER_THAN</operator> <condition-expression/> <use-active-baseline>false</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>Hardware Resources|Memory|Used %</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </warning-execution-criteria> </health-rule> 
<health-rule> <name>APM_Heap_JVM_x_tier_name</name> <type>INFRASTRUCTURE</type> <description/> <enabled>true</enabled> <is-default>false</is-default> <always-enabled>true</always-enabled> <duration-min>30</duration-min> <wait-time-min>30</wait-time-min> <affected-entities-match-criteria> <affected-infra-match-criteria> <type>SPECIFIC_TIERS</type> <application-components> <application-component>x_tier_name</application-component> </application-components> </affected-infra-match-criteria> </affected-entities-match-criteria> <critical-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>Memory:Heap|Used %</display-name> <short-name>A</short-name> <condition-value-type>BASELINE_STANDARD_DEVIATION</condition-value-type> <condition-value>5.0</condition-value> <operator>GREATER_THAN</operator> <condition-expression/> <use-active-baseline>true</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>JVM|Memory:Heap|Used %</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </critical-execution-criteria> <warning-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>Memory:Heap|Used %</display-name> <short-name>A</short-name> <condition-value-type>BASELINE_STANDARD_DEVIATION</condition-value-type> <condition-value>3.0</condition-value> <operator>GREATER_THAN</operator> <condition-expression/> <use-active-baseline>true</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>JVM|Memory:Heap|Used %</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </warning-execution-criteria> </health-rule> <health-rule> <name>APM_Garbage_Collection_JVM_x_tier_name</name> <type>INFRASTRUCTURE</type> <description/> <enabled>true</enabled> <is-default>false</is-default> <always-enabled>true</always-enabled> <duration-min>30</duration-min> <wait-time-min>30</wait-time-min> <affected-entities-match-criteria> <affected-infra-match-criteria> <type>SPECIFIC_TIERS</type> <application-components> <application-component>x_tier_name</application-component> </application-components> </affected-infra-match-criteria> </affected-entities-match-criteria> <critical-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>GC Time spent per min (ms)</display-name> <short-name>A</short-name> <condition-value-type>BASELINE_STANDARD_DEVIATION</condition-value-type> <condition-value>5.0</condition-value> 
<operator>GREATER_THAN</operator> <condition-expression/> <use-active-baseline>true</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>JVM|Garbage Collection|GC Time Spent Per Min (ms)</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </critical-execution-criteria> <warning-execution-criteria> <condition-aggregation-type>ALL</condition-aggregation-type> <entity-aggregation-scope> <type>ANY</type> <value>0</value> </entity-aggregation-scope> <policy-condition> <type>leaf</type> <display-name>GC Time spent per min (ms)</display-name> <short-name>A</short-name> <condition-value-type>BASELINE_STANDARD_DEVIATION</condition-value-type> <condition-value>3.0</condition-value> <operator>GREATER_THAN</operator> <condition-expression/> <use-active-baseline>true</use-active-baseline> <trigger-on-no-data>false</trigger-on-no-data> <enable-triggers>false</enable-triggers> <min-triggers>15</min-triggers> <metric-expression> <type>leaf</type> <function-type>VALUE</function-type> <value>0</value> <is-literal-expression>false</is-literal-expression> <display-name>null</display-name> <metric-definition> <type>LOGICAL_METRIC</type> <logical-metric-name>JVM|Garbage Collection|GC Time Spent Per Min (ms)</logical-metric-name> </metric-definition> </metric-expression> </policy-condition> </warning-execution-criteria> </health-rule> </health-rules>   Thank you Regards, Mohammed Saad
Hi there!  I am attempting to set up the Microsoft Security Add-On on our Splunk Cloud (Victoria Experience).  I was able to install the app on the Splunk Cloud Platform, and understand my next step is to configure the app to connect to Azure AD.  I receive the following errors on the Configuration and Inputs tabs:   I confirmed that the sc_admin role has all the needed permissions, and that my account has the role sc_admin. I also confirmed that the list_inputs permission is assigned (the closest post I could find on Splunk Answers said to check this).  Any ideas what I should check next? Thank you!    
Hello, I am trying to get the above add-on working in our environment. Our environment comprises 2 heavy forwarders and a deployment server; the heavy forwarders filter all data to Splunk Cloud. When setting the add-on up, I confirmed that both heavy forwarders can connect to our on-premise Jira server and both pull down data from Jira, e.g. projects. We have the setup in passthrough mode with passthrough enabled within Splunk Cloud. I'm aware that Splunk Cloud will connect to the heavy forwarders and pull information from the KV store, but this does not appear to be happening. The add-on within Splunk Cloud still tries to connect to Jira when an account is populated in the configuration, and when the configuration is removed it complains about needing an account. A bearer token has been created within Splunk Cloud and both heavy forwarders have been populated with the bearer token. Has anyone successfully set this up, and if so, do you have any pointers?
Hi, on a brand new Splunk install, the app tries to use urllib2, but Splunk only ships urllib3. There is an exception handler where a "," is used instead of "as" (line 388 of splunk_rest_client.py). It also tries to use something from the cStringIO module, which does not exist in Splunk or the app.
Hello all,   The Splunk default admin name has been changed and now I get the below error on Splunk DB connect. Please can someone let me know which conf file holds this info so I can change it to the new username?     Splunkd error: HTTP 400 -- User with name=admin does not exist
Hello everyone. Please reply if you have any solution for adding a "show more"/"show less" function to a Splunk dashboard table column. Let's say there is one table with 4 columns (C1, C2, C3, C4) and 5 rows (R1, R2, R3, R4, R5). Column C2 has 1 value in R1, 10 values in R2, 4 values in R3, 5 values in R4, and 2 values in R5. I want to show one value by default and, if there is more than one value, enable a "show more" option to expand the remaining details and a "show less" option to collapse them. Thanks in advance!
Hi Team, below is my query:

search index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully" | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","") | head 7 | eval EBNCStatus="ebnc event balanced successfully" | table EBNCStatus True ] | rename busDt as Business_Date | rename fileName as File_Name | rename CARS.UNB_Duration as CARS.UNB_Duration(Minutes) | table Business_Date File_Name StartTime EndTime CARS.UNB_Duration(Minutes) Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus | sort -Business_Date

I am sorting on the basis of business date, but my StartTime and EndTime are not coming out correctly. Can someone guide me? Below is the screenshot for the same.
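If StartTime and EndTime are string fields, sorting and comparisons treat them lexicographically; a minimal sketch of converting them to epoch time before sorting, with the format string an assumption that should be adjusted to match the actual field values:

| eval start_epoch=strptime(StartTime, "%Y-%m-%d %H:%M:%S.%3N")
| eval end_epoch=strptime(EndTime, "%Y-%m-%d %H:%M:%S.%3N")
| sort - Business_Date - start_epoch

The epoch fields can be formatted back for display with strftime if needed.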
How do I change the font size of text inside bar charts, column charts, and tables using the dashboard XML source? I tried font-size: 15, but it didn't work in the XML source.
Hello, for solid reasons that I can't go into here, we have a topology of AWS CloudWatch -> Kinesis Firehose -> AWS Delivery Stream Object -> AWS Lambda -> HEC listener on a Heavy Forwarder -> that Heavy Forwarder -> another Heavy Forwarder -> Splunk Cloud. I'm pretty sure that (apart from having one HF forward to a second before hitting Splunk Cloud) that is the reference architecture for CloudWatch events. There is no Splunk indexing going on in our infrastructure; we are just forwarding loads of information to Splunk Cloud for indexing and analysis there. We can establish latency through most of that chain, but we are interested in determining the latency from when our events land in Splunk Cloud to when those events become visible for analysis. Is there a handy metric or query we can re-use? Thanks in advance...
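One commonly used approximation compares each event's timestamp with the time it was indexed; a minimal sketch, with the index name as a placeholder (note this measures end-to-end delay from the event's own timestamp, not just the final hop into Splunk Cloud):

index=<your_index> earliest=-1h
| eval latency_sec = _indextime - _time
| stats avg(latency_sec) as avg_latency perc95(latency_sec) as p95_latency max(latency_sec) as max_latency by sourcetype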
Hi All, Below is my search query - index="idx-network-firewall" (sourcetype="fgt_traffic" OR sourcetype="fortigate_traffic") | stats latest(_time) values(srcname) as src latest(app) as app by src_ip | lookup Stores_Inventory src_ip OUTPUT Device | table src_ip Device src app  I have 3 fields: src_ip, src, and Device. I am getting the field values for src from the first 2 lines of the query - index="idx-network-firewall" (sourcetype="fgt_traffic" OR sourcetype="fortigate_traffic") | stats latest(_time) values(srcname) as src latest(app) as app by src_ip  I am trying to build a search query that finds src_ip values whose src name and Device name do not match. Thanks in advance.
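A sketch of one way to surface the mismatches, assuming src can be reduced to a single value per src_ip (latest() is used instead of values() for that reason) and that the comparison should be case-insensitive:

index="idx-network-firewall" (sourcetype="fgt_traffic" OR sourcetype="fortigate_traffic")
| stats latest(_time) as last_seen latest(srcname) as src latest(app) as app by src_ip
| lookup Stores_Inventory src_ip OUTPUT Device
| where isnotnull(Device) AND isnotnull(src) AND lower(src)!=lower(Device)
| table src_ip Device src app

If src really needs to stay multivalued, the comparison would need an mvmap/mvfilter-style check instead of the simple where clause.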
Hey @carasso and @splunk team, I want to build a Splunk query with the below requirements. Data source: sourcetypepcf app_name=xyz HTTP_PATH="/*". Time frame: the query should cover a 4-week period (earliest=-4w). Display: calculate and display the average count per hour for the current day of the week, split by HTTP_STATUS_CODE. Using the reference https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-Chart-Average-of-Last-4-Thursdays-vs-Today-in-a-Timechart/m-p/167913 we built a query, but when we calculate the average we get zero results. The query is - [search ] earliest=-4w | eval current_day = strftime(now(), "%A") | eval log_day = strftime(_time, "%A") | where current_day == log_day | timechart span=1h avg(count) by HTTP_STATUS_CODE. I would expect it to take the average by hour across all 4 matching days and build a timechart with a 1-hour span for 24 hours. Can you please help with the same?
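The avg(count) in that timechart returns nothing because no count field exists before an aggregation. A sketch of one way to get the average hourly count for the current weekday over the last 4 weeks; the index filter is a placeholder to be replaced with the actual data-source filter from the requirements above:

index=<your_index> app_name=xyz HTTP_PATH="/*" earliest=-4w@d
| where strftime(_time, "%A") == strftime(now(), "%A")
| bin _time span=1h
| stats count by _time, HTTP_STATUS_CODE
| eval hour=strftime(_time, "%H")
| stats avg(count) as avg_count_per_hour by hour, HTTP_STATUS_CODE

The first stats counts events per clock hour per status code; the second averages those hourly counts across the four matching weekdays.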
  2023-08-04 08:53:00.473, ID="15438391", EventClass="10", textdata="exec up_tcsbs_ess_ins_ipsysuser @IID=20231619,@RoleID=NULL,@AdpGuid='F31B78A6-285F-4E8A-A063-8581CEA30AD4',@PersonId='641',@dob='1991-03-16 00:00:00',@ssn='114784117',@tin=default,@companyname=default,@contactzip='181037802',@hiredate='2023-07-14 00:00:00',@adpUserId=NULL,@associateId=default,@essRoleId='15'", HostName="DC1PRRUNVBT0034", ClientProcessID="20496", ApplicationName=".Net SqlClient Data Provider", LoginName="TcStandard", SPID="5893", Duration="3247079", StartTime="2023-08-04 09:53:00.473", EndTime="2023-08-04 09:53:03.72", Reads="95", Writes="5", CPU="0", Error="0", DatabaseName="iFarm", RowCounts="6", RequestID="0", EventSequence="1447598967", SessionLoginName="TcStandard", ServerName="DC1PRMSPADB40"  
Can Kaspersky Security Center with a free license export syslog to Splunk? And if it can, how do I configure a new file monitor input on a forwarder to collect the syslog from Kaspersky Security Center?
Hi Splunkers, I have to perform a UF configuration and I don't know if any problems could arise. Let me explain. For a customer, we are collecting data from Windows systems using the UF. All selected logs come in fine. Now we have to collect Windows DNS query logs; they are collected in debug mode and then stored in a path. So, before any UF or Splunk action, the flow is: Win DNS set to debug mode -> log forwarded to a server -> logs stored in the server's path. Due to the high volume of collected data, on that server there are 2 scripts that enforce a retention policy and, in a nutshell, delete logs older than 1 day. This is because when DNS forwards logs, it writes a file of at most 500 MB and then creates another one; files are written until the threshold is reached. Since we want to use the UF to monitor that path, our customer asked us about its behavior regarding file monitoring; his doubt is how the UF works when monitoring files, especially the one currently being written. My understanding is that the UF should work exactly like any other File & Directory data input: if we tell it, in an inputs.conf stanza, to monitor path X, it should simply monitor each file in a sequential manner; am I right?
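For reference, a minimal sketch of what such a monitor stanza might look like on the UF; the path, index, and sourcetype are hypothetical placeholders, not values from the question:

[monitor://D:\DNSLogs\*.log]
disabled = false
index = dns
sourcetype = windows_dns_debug
# Skip files whose modification time is older than the server-side retention window
ignoreOlderThan = 2d

The monitor input tails new data appended to each matching file, including the file currently being written, and picks up newly created files as they appear.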