All Topics


Using regex, what is the syntax to trim a timestamp formatted like 2022-01-06 01:51:23 UTC so that it only reflects the date and hour, like this: 2022-01-06 01?
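One way to approach this, sketched with rex on made-up data; the field name timestamp is an assumption, so rename it to match your data:

| makeresults
| eval timestamp="2022-01-06 01:51:23 UTC"
| rex field=timestamp "(?<date_hour>\d{4}-\d{2}-\d{2} \d{2})"
| table timestamp date_hour

The named capture group keeps only the date plus the two-digit hour, so date_hour comes out as 2022-01-06 01.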
Hello Splunk Experts: From a system, we receive the following events in Splunk. I would like to get the events which don't have a logEvent of Received but only a logEvent of Delivered. The traceId field has the same value on both the Received and Delivered events. In the example below, traceId=101 is such an event.

{"logEvent":"Received","traceId": "100","message":"Inbound received", "id" : "00991"}
{"logEvent":"Delivered","traceId": "100","message":"Inbound sent", "id" : "00991-0"}
{"logEvent":"Delivered","traceId": "101","message":"Inbound sent", "id" : "00992-0"}
{"logEvent":"Received","traceId": "102","message":"Inbound received","id" : "00993"}
{"logEvent":"Delivered","traceId": "102","message":"Inbound sent","id" : "00993-0"}
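A hedged sketch of one way to find traceIds that only ever appear with Delivered; the index and sourcetype are placeholders to adjust:

index=your_index sourcetype=your_sourcetype logEvent IN ("Received", "Delivered")
| stats values(logEvent) as events by traceId
| where mvcount(events)=1 AND events="Delivered"

This collects the distinct logEvent values per traceId and keeps only those where Delivered is the sole value.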
If I had logs for the `_internal` index and logs for a `linux_os` index on a heavy forwarder, does the HF prioritize the `linux_os` index data over the `_internal` data on the host? Is there any precedence for the data Splunk is monitoring? Do indexers have a precedence for which kind of data to index first?
Hello, I am looking for a solution to send Splunk alerts to the Splunk mobile application. So far I was using the "Splunk Cloud Gateway" app from Splunkbase on my Splunk lab (a standalone Splunk VM), which was based on Splunk 8.0.x. Since I recently wanted to upgrade to Splunk 8.2.4, I also needed to move to the "embedded" Splunk Secure Gateway app. Since I did not need the former indexed data, I decided to remove Splunk 8.0 and do a fresh install of 8.2.4 (no upgrade on the Splunk side, nor migration from Cloud Gateway to Secure Gateway). After opting in for Secure Gateway, the gateway managed to stay "connected" for about 10 minutes (I can see "ping-pong" messages in the Secure Gateway logs/_internal index). But it suddenly stopped working; the status in the dashboard is now desperately showing "not connected". The last "ping-pong" exchange is the following one: (screenshot not included). This was this morning at 0:20 AM (twenty past midnight, 10 minutes after the gateway opt-in/config). On the errors side, the first one I can see is this one (7 minutes before 0:20 AM): (screenshot not included). Then this one when the "ping-pong" traffic stopped (at 0:20 AM): (screenshot not included). And then such ones: (screenshot not included). I've checked all the logs of the gateway, enabled DEBUG traces, analyzed the Python code, checked these errors, changed the timeouts in the app conf file to bigger values, and looked at the troubleshooting sections of the doc, but I could not yet find why it suddenly stopped working. To be complete, I am running on a lab VM (2 vCPU, 8 GB of RAM), which is under the prereq specs, I know, and with a self-signed SSL certificate generated by Splunk when I changed the server settings to use HTTPS. I am behind a Sophos UTM 9.7 which protects my home network, and I've made a rule to disable filtering (like SSL scanning etc.) for URLs that end in *.spl.mobi. Would you have any directions or clues for fixing this connectivity issue? Thanks in advance.
Hi,
We are trying to pull information from some of the database tables in ServiceNow into our Splunk Enterprise environment using the add-on, but since the tables are fairly heavy, we aren't able to get it all working successfully; some of the tables end up with the following error message:

2022-02-10 09:08:31,159 ERROR pid=12171 tid=Thread-20 file=snow_data_loader.py:collect_data:181 | Failure occurred while getting records for the table: syslog_transaction from https://---.net/. The reason for failure= {'message': 'Transaction cancelled: maximum execution time exceeded', 'detail': 'maximum execution time exceeded Check logs for error trace or enable glide.rest.debug property to verify REST request processing'}. Contact Splunk administrator for further information.

Now, I was told by ServiceNow support that we might be able to prevent this from happening (and hence get it working successfully) by introducing query parameters. Does anyone have experience with configuring the add-on to comply with that? As a reference, the ServiceNow support sent me this:

"I'm not familiar with the configuration options for the Splunk addon. However, if you would like your API requests to take a shorter time, I would suggest that you limit the number of records you are fetching per request, use pagination, and also limit the number of columns you are selecting.
a). You can implement pagination by using the URL parameter sysparm_offset. As an example, in the initial request you can configure sysparm_offset=0&sysparm_limit=100, then on the next call you will increment the offset by 100 to sysparm_offset=100&sysparm_limit=100. You will need to keep incrementing the offset after each response until you reach the limit of 25000.
b). In order to limit the number of columns, you will need to use the URL parameter sysparm_fields. For example, if you only require the task number and short description, you will configure the URL parameter as sysparm_fields=number,short_description&sysparm_limit=100. Below is an example of a complete URL with both sysparm_fields and sysparm_offset configured.
api/now/table/task?sysparm_limit=100&sysparm_query=ORDERBYDESCsys_created_on&sysparm_fields=number,short_description&sysparm_offset=0"

Does anyone have an idea on how to proceed to get this working better? Any ideas/suggestions would be really helpful.
Thanks,
Artelia
Hello guys!! I have a question about the lookup command when the lookup file contains both strings and regular expressions. The following is an example.

field var_1 : String
field var_2 : String
field var_3 : Regex or String
field var_4 : String

------lookup file-----------------------------
var_1, var_2, var_3, var_4
data10, data11, .+(:?aaa|bbb), data13
data20, data21, .+(:?ccc|ddd|eee), data23
data30, data31, .+(:?eee)fff+(:?ggg|hhh), data33
--------------------------------------------------

I would like to return var_4 when var_1, var_2, and var_3 are all matched by the lookup command, but var_3 may contain a regular expression, and the lookup needs to match against that regular expression. As you know, regular expressions are not allowed in the lookup fields of the lookup command.

↓↓↓ Regular expressions cannot be used ↓↓↓

| makeresults
| eval var_1 = "data10", var_2 = "data11" , var_3 = "ABC123aaa"
| lookup var_1 var_2 var_3 OUTPUT var_4

It is necessary to use the lookup file (CSV). If the lookup command is not the best way to solve this problem, then another command such as join is fine to use. Obviously, I don't intend to use only the lookup command; I'm looking for other ways to do it as well. Can someone please help me with this? Thanks in advance!!
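One possible direction, sketched under two assumptions: the lookup file is named mylookup.csv, and the eval match() function is given its pattern from a field (which works in current Splunk versions). The exact-match columns are joined normally, then the regex column is applied afterwards:

| makeresults
| eval var_1 = "data10", var_2 = "data11", var_3 = "ABC123aaa"
| join type=inner max=0 var_1 var_2 [| inputlookup mylookup.csv | rename var_3 as var_3_pattern]
| where match(var_3, var_3_pattern)
| table var_4

max=0 keeps every candidate row from the CSV for a given var_1/var_2 pair, so multiple regex rows can be tested before the where filter decides which one survives.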
I need to filter different error values for a range of different instruments. To do this, I have created a macro and lookup that use the host field and the name of the measurement field to determine whether the value should be removed or not. This part works well, but in some cases we also need to correct the measurements in different time periods due to calibrations etc. For those cases, I have created columns in the lookup named "case_<number>" which contain time_start, time_stop, value_to_remove, adjustment_value. An example would be host=extensometer_001 with a distance_mm field where we need to correct the following measurements:

- Remove -222 between unix time 1644546240 and 1644586240
- Adjust +30 for measurements between unix time 1641546240 and 1644566200

The case columns in the lookup would then look like this:

case_001=1644546240,1644586240,-222
case_002=1641546240,1644566200,,30

To be able to handle zero or more cases per host and field, I use foreach in the following way:

| foreach "case_*" [| makemv delim="," <<FIELD>>
| eval distance_mm= if(_time > mvindex(<<FIELD>>,0) AND _time < mvindex(<<FIELD>>,1) AND like(distance_mm,mvindex(<<FIELD>>,2)), "NULL", distance_mm)
| eval distance_mm= if(_time > mvindex(<<FIELD>>,0) AND _time < mvindex(<<FIELD>>,1) AND mvindex(<<FIELD>>,3)!="",distance_mm+tonumber(mvindex(<<FIELD>>,3)),distance_mm) ]

The problem is that when I look at "All time" and graph distance_mm with timechart at the end of the search, I end up seeing empty buckets all the way back to the first event indexed in the index (even if the data in my search is not that old). If I remove the foreach section, the problem goes away. I cannot see what is happening that makes timechart show this period without data. The interesting thing is that if I look at the data in the "Statistics" view right before the timechart, it only shows the time period with data. It is only when the timechart command runs that the empty buckets appear.

Image of results with foreach: (screenshot not included)
Image of results without foreach: (screenshot not included)

Does anyone know what is going wrong here or, in the worst case, how to get around it? (I could use cont=false to make Splunk zoom into the area where there is data, but then I would not be able to choose "show gaps" where data is missing, which is a requirement from the client.)

Full search:

| tstats summariesonly=false allow_old_summaries=false avg("Extensometer.distance_mm") as distance_mm FROM datamodel=Extensometer WHERE sourcetype="EXT" BY host, sourcetype, _time span=60min
| eval field="distance_mm"
| lookup error-filtering.csv instrument as host field as field
| foreach "case_*" [| makemv delim="," <<FIELD>>
| eval distance_mm= if(_time > mvindex(<<FIELD>>,0) AND _time < mvindex(<<FIELD>>,1) AND like(distance_mm,mvindex(<<FIELD>>,2)), "NULL", distance_mm)
| eval distance_mm= if(_time > mvindex(<<FIELD>>,0) AND _time < mvindex(<<FIELD>>,1) AND mvindex(<<FIELD>>,3)!="",distance_mm+tonumber(mvindex(<<FIELD>>,3)),distance_mm) ]
| streamstats window=2 earliest(distance_mm) as earliest_distance_mm latest(distance_mm) as latest_distance_mm by host
| eval change_distance_mm=(latest_distance_mm - earliest_distance_mm)
| streamstats sum(change_distance_mm) as acc_change_distance_mm by host
| timechart span=1w limit=0 eval(round(avg(acc_change_distance_mm),2)) as distance_mm by host
hi
I try to display percentages in my bar chart like this, but it doesn't work:

| chart count as total over sig_application by sig_transaction
| eval total=0
| foreach count* [ eval total=total + <<FIELD>>]
| foreach count* [ eval <<FIELD>>=round((<<FIELD>>/total)*100,1)]
| fields - total

Can anybody help please?
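A hedged guess at the cause plus a possible fix: with a by clause, chart names the output columns after the sig_transaction values rather than count*, so foreach count* matches nothing. Iterating over all columns and letting addtotals build the row total might work (a sketch, not tested against your data):

| chart count over sig_application by sig_transaction
| addtotals fieldname=total
| foreach * [ eval <<FIELD>>=if("<<FIELD>>"!="sig_application" AND "<<FIELD>>"!="total" AND isnum('<<FIELD>>'), round(('<<FIELD>>'/total)*100,1), '<<FIELD>>') ]
| fields - total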
Hello everybody,
I have a report that is generated every week. I want to name the report with the previous week's number. I use the "action.email.reportFileName" field to choose the generated report's name.
For example: today is 2022/02/11, which is in the 6th week of the year. The report is scheduled today, but I want it to mention the W-1 week, so the number 5.
I found that with the %V variable I can dynamically generate the report name with the current week number. I'm looking for a trick to put the previous week's number instead.
If someone has a solution, please share.
Kind regards!
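The strftime-style variables in reportFileName cannot do arithmetic, but the previous week's number is easy to compute in SPL with relative_time(); whether you can then feed it into the file name (for example through a $result.*$ token) depends on your version and is an assumption to verify, so treat this as a direction rather than a recipe:

| makeresults
| eval prev_week=strftime(relative_time(now(), "-7d"), "%V")

Shifting "now" back seven days before formatting with %V yields last week's ISO week number, including across year boundaries.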
Hello,
I have a question about the Splunk Add-on for CrowdStrike FDR developed by Splunk. I would like to filter out events in addition to what the add-on provides (that is, filtering by event_simpleName). My exact use case is that I want to drop events with IsOnRemovableDisk\"\:\"1 in the raw message. I tried to do it with props/transforms applied to the appropriate sourcetype, yet it does not seem to be applied at all. Even with a config as simple as this:

props.conf:

[crowdstrike:events:sensor]
TRANSFORMS-usb = do_not_index

transforms.conf:

[do_not_index]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

where I expected all the events to be dropped, it does not get applied, and all the events except what is configured with the Event Filter in the add-on are ingested into Splunk. Am I missing anything? Is it even possible to filter events in more detail with the Splunk Add-on for CrowdStrike FDR based on the raw data of events?
I have 1 Splunk server. It is the search head, indexer, and deployment server. I have Sysmon and the Splunk universal forwarder installed on my clients. I also have Splunk_TA_microsoft_sysmon installed under /opt/splunk/etc/apps. The app is installed on the client. The Sysmon client logs are getting to the indexer, but they are going to the main index. I want to change this to the newly created sysmon index. I have tried creating a /local/inputs.conf file on the deployment server with index = sysmon:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = 1
index = sysmon

I expected it to change the inputs.conf on the client side, but that never happens. It seems as though the client is honoring another .conf file. I am not sure what I am missing. Any advice would be appreciated.
We have a couple of processes that run regularly, and I want to capture the errors and group them run-wise and date-wise. I tried transaction, but it's not splitting run-wise and put all the errors in the same group. Please help, thanks.

LogDate = "01/28/2022 03:00:47.417" , LogNo = "133" , LogLevel = "INFO" , LogType = "Bot End" , LogMessage = "Logger Session Stopped; Total run time: 0:17:22.002" , TimeTaken = "0:00:00.500" , ProcessName = "FARollforward" , TaskName = "Logger" , RPAEnvironment = "PROD" , LogId = "0133010____120220128030047417" , MachineName = "xxxxx" , User = "xxxxxx"
LogDate = "01/28/2022 03:00:38.679" , LogNo = "125" , LogLevel = "ERROR" , LogType = "Process Level" , LogMessage = "EXCEPTION: CustomSubTaskError;" , TimeTaken = "0:00:00.005" , ProcessName = "FARollforward" , TaskName = "NavigateOracle" , RPAEnvironment = "PROD" , LogId = "0125010____120220128030038679" , MachineName = "xxxxx" , User = "xxxxxx"
LogDate = "01/28/2022 01:01:47.004" , LogNo = "51" , LogLevel = "ERROR" , LogType = "Process Level" , LogMessage = "EXCEPTION: Unable to perform LEFTCLICK action. , TimeTaken = "0:00:00.017" , ProcessName = "FARollforward" , TaskName = "FARollforward-NavigateOracle" , RPAEnvironment = "PROD" , LogId = "0051010____120220128010147004" , MachineName = "xxxxxxx" , User = "xxxxxx"
LogDate = "01/27/2022 23:59:20.534" , LogNo = "1" , LogLevel = "INFO" , LogType = "Bot Start" , LogMessage = "Logger Session Started" , TimeTaken = "0:00:00.000" , ProcessName = "FARollforward" , TaskName = "Logger" , RPAEnvironment = "PROD" , LogId = "0001010____120220127235920534" , MachineName = "xxxxxx" , User = "xxxxx"

Desired output:

ProcessName | Errors | Date
FARollForward | EXCEPTION: CustomSubTaskError; EXCEPTION: Unable to perform LEFTCLICK action | 01/28/2022
Cp | EXCEPTION: CustomSubTaskError; EXCEPTION: Unable to perform LEFTCLICK action; Exception: Failed | 02/07/2022
FARollForward | EXCEPTION: CustomSubTaskError; EXCEPTION: Unable to perform LEFTCLICK action | 02/08/2022
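A hedged sketch of one way to group run-wise without transaction: derive a run counter from the "Bot Start" events with streamstats, then aggregate the errors per process, run, and day. The index name is a placeholder, and the field names follow the events above:

index=your_index ProcessName=*
| sort 0 _time
| streamstats count(eval(LogType="Bot Start")) as run_id by ProcessName
| eval Date=strftime(_time, "%m/%d/%Y")
| stats values(eval(if(LogLevel="ERROR", LogMessage, null()))) as Errors by ProcessName run_id Date
| where isnotnull(Errors)

Each "Bot Start" increments run_id for its process, so errors between two starts land in the same group even when several runs happen on the same date.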
Hello everyone, I'm going to try to be clear about what I'm trying to do. I have a search that lists computers with different criticality levels and owners, like this:

Owner | IP | Risk | CVE
User A | 10.10.10.10 | Critical | xxxxx
User B | 10.10.10.11 | Critical | xxxxx

I set an alert that triggers for each result if "owner > 0", and I send an email to IT support with the PDF of the search results attached. An alert is now sent for each owner/IP machine, which is fine, but the PDF attached to the email contains all the results. I would like the PDF to contain only the results that concern that owner, like this:

Owner | IP | Risk | CVE
User A | 10.10.10.10 | Critical | xxxxxx

Do I need to filter the search by owner and create an alert for each owner?
I don't know if it's very clear.
Regards,
I have 2 dashboards, and the second dashboard is a drill-down for the first one. Everything is working as expected, but in the second dashboard the post-processing search is not working. I want to hide a row if any of the panels in that row has 0 as output. I tried many ways, but it's not working. I'm pasting the code for the two dashboards here; please let me know what is missing. Thanks for the help.

Dashboard 1 has this drilldown:

<drilldown>
  <link target="_blank">/app/search/business_detailed?form.time_second_dashboard.earliest=$field1.earliest$&amp;form.time_second_dashboard.latest=$field1.latest$&amp;form.environment=$env$&amp;form.task=$click.value$</link>
</drilldown>

Dashboard 2 code:

<form>
  <label>business_detailed</label>
  <fieldset submitButton="false">
    <input type="time" token="time_second_dashboard">
      <label>Select Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="environment">
      <label>Environment</label>
      <choice value="&quot;UAT&quot;">UAT</choice>
      <choice value="&quot;PROD&quot;">PROD</choice>
      <fieldForLabel>env</fieldForLabel>
      <fieldForValue>env</fieldForValue>
    </input>
    <input type="dropdown" token="task">
      <label>BOT Process</label>
    </input>
  </fieldset>
  <row>
    <panel rejects="$panel_show$">
      <single>
        <search>
          <query>| makeresults |eval bot="cp Main"|table bot</query>
          <earliest>$time_second_dashboard.earliest$</earliest>
          <latest>$time_second_dashboard.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
    <panel rejects="$panel_show$">
      <single>
        <title>Total Runs</title>
        <search>
          <query>index = "abc" env =$environment$ LogType = "*" TaskName = $task$-Main | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | rename TaskName as "Task Name", host as "VDI" | stats count(eval(LogMessage = "FATAL: process ended errorneously")) as Failed_Count, count(eval(LogMessage = "END: cp-Main execution")) as Success_Count1 |eval tot_count= Failed_Count + Success_Count1|table tot_count</query>
          <earliest>$time_second_dashboard.earliest$</earliest>
          <latest>$time_second_dashboard.latest$</latest>
          <progress>
            <condition match="$job.resultCount$ == 0">
              <set token="panel_show">true</set>
            </condition>
            <condition>
              <unset token="panel_show"></unset>
            </condition>
          </progress>
        </search>
        <option name="colorMode">none</option>
        <option name="drilldown">none</option>
        <option name="link.visible">0</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xdc4e41"]</option>
        <option name="rangeValues">[0,100000000]</option>
        <option name="refresh.display">progressbar</option>
        <option name="refresh.link.visible">1</option>
        <option name="useColors">1</option>
      </single>
    </panel>
    <panel rejects="$panel_show$">
      <single>
        <title>Process completed Successfully</title>
        <search>
          <query>index = "abc" env = $environment$ LogType = "*" TaskName = $task$-Main LogMessage= "END: cp-Main execution" | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | table Time, LogNo, host, LogType, LogMessage, TaskName | rename LogMessage as "Log Message", TaskName as "Task Name", host as "VDI" | sort - Time|stats count</query>
          <earliest>$time_second_dashboard.earliest$</earliest>
          <latest>$time_second_dashboard.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0x53a051","0xdc4e41"]</option>
        <option name="rangeValues">[1000000000]</option>
        <option name="refresh.display">progressbar</option>
        <option name="link.visible">false</option>
        <option name="refresh.link.visible">true</option>
        <option name="useColors">1</option>
      </single>
    </panel>
    <panel rejects="$panel_show$">
      <single>
        <title>Process completed with Error</title>
        <search>
          <query>index = "abc" env = $environment$ LogType = "*" TaskName = $task$-Main "FATAL: process ended errorneously"| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | table Time, LogNo, host, LogType, LogMessage, TaskName | rename LogMessage as "Log Message", TaskName as "Task Name", host as "VDI" | sort - Time|stats count</query>
          <earliest>$time_second_dashboard.earliest$</earliest>
          <latest>$time_second_dashboard.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0xdc4e41","0xdc4e41"]</option>
        <option name="rangeValues">[10000000]</option>
        <option name="refresh.display">progressbar</option>
        <option name="link.visible">false</option>
        <option name="refresh.link.visible">true</option>
        <option name="useColors">1</option>
      </single>
    </panel>
    <panel rejects="$panel_show$">
      <single>
        <title>Success Percent</title>
        <search>
          <query>index = "abc" env = $environment$ LogType = "*" TaskName =$task$-Main | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | rename TaskName as "Task Name", host as "VDI" | stats count(eval(LogMessage = "FATAL: process ended errorneously")) as Failed_Count, ,count(eval(LogMessage = "END: cp-Main execution")) as Success_Count1 | eval tot_count= Failed_Count + Success_Count1 | eval succ_per=round((Success_Count1/tot_count)*100,0)|table succ_per</query>
          <earliest>$time_second_dashboard.earliest$</earliest>
          <latest>$time_second_dashboard.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0x581845","0xdc4e41"]</option>
        <option name="rangeValues">[100]</option>
        <option name="refresh.display">progressbar</option>
        <option name="link.visible">false</option>
        <option name="refresh.link.visible">true</option>
        <option name="unit">%</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
  <row depends="$panel_show1$">
    <panel>
      <single>
        <search>
          <query>| makeresults |eval bot="cp Adhoc"|table bot</query>
          <earliest>$time_second_dashboard.earliest$</earliest>
          <latest>$time_second_dashboard.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
    <panel>
      <single>
        <title>Total Runs</title>
        <search>
          <query>index = "abc" env =$environment$ LogType = "*" TaskName = $task$-Main-Adhoc | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | rename TaskName as "Task Name", host as "VDI" | stats count(eval(LogMessage = "FATAL: process ended errorneously")) as Failed_Count, count(eval(LogMessage = "END: process execution")) as Success_Count1 |eval tot_count= Failed_Count + Success_Count1|table tot_count</query>
          <earliest>$time_second_dashboard.earliest$</earliest>
          <latest>$time_second_dashboard.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <progress>
            <condition match="'job.resultCount' > 0">
              <set token="panel_show1">true</set>
              <unset token="panel_hide1"></unset>
            </condition>
            <condition>
              <set token="panel_hide1">true</set>
              <unset token="panel_show1"></unset>
            </condition>
          </progress>
        </search>
        <option name="colorMode">none</option>
        <option name="drilldown">none</option>
        <option name="link.visible">0</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xdc4e41"]</option>
        <option name="rangeValues">[0,100000000]</option>
        <option name="refresh.display">progressbar</option>
        <option name="refresh.link.visible">1</option>
        <option name="useColors">1</option>
      </single>
    </panel>
    <panel>
      <single>
        <title>Process completed Successfully</title>
        <search>
          <query>index = "abc" env = $environment$ LogType = "*" TaskName = $task$-Main-Adhoc LogMessage= "END: process execution" | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | table Time, LogNo, host, LogType, LogMessage, TaskName | rename LogMessage as "Log Message", TaskName as "Task Name", host as "VDI" | sort - Time|stats count</query>
          <earliest>$time_second_dashboard.earliest$</earliest>
          <latest>$time_second_dashboard.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0x53a051","0xdc4e41"]</option>
        <option name="rangeValues">[1000000000]</option>
        <option name="refresh.display">progressbar</option>
        <option name="link.visible">false</option>
        <option name="refresh.link.visible">true</option>
        <option name="useColors">1</option>
      </single>
    </panel>
    <panel>
      <single>
        <title>Process completed with Error</title>
        <search>
          <query>index = "abc" env = $environment$ LogType = "*" TaskName = $task$-Main-Adhoc "FATAL: process ended errorneously"| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | table Time, LogNo, host, LogType, LogMessage, TaskName | rename LogMessage as "Log Message", TaskName as "Task Name", host as "VDI" | sort - Time|stats count</query>
          <earliest>$time_second_dashboard.earliest$</earliest>
          <latest>$time_second_dashboard.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0xdc4e41","0xdc4e41"]</option>
        <option name="rangeValues">[10000000]</option>
        <option name="refresh.display">progressbar</option>
        <option name="link.visible">false</option>
        <option name="refresh.link.visible">true</option>
        <option name="useColors">1</option>
      </single>
    </panel>
    <panel>
      <single>
        <title>Success Percent</title>
        <search>
          <query>index = "abc" env = $environment$ LogType = "*" TaskName =$task$-Main-Adhoc | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | rename TaskName as "Task Name", host as "VDI" | stats count(eval(LogMessage = "FATAL: process ended errorneously")) as Failed_Count, ,count(eval(LogMessage = "END: process execution")) as Success_Count1 | eval tot_count= Failed_Count + Success_Count1 | eval succ_per=round((Success_Count1/tot_count)*100,0)|table succ_per</query>
          <earliest>$time_second_dashboard.earliest$</earliest>
          <latest>$time_second_dashboard.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0x581845","0xdc4e41"]</option>
        <option name="rangeValues">[100]</option>
        <option name="refresh.display">progressbar</option>
        <option name="link.visible">false</option>
        <option name="refresh.link.visible">true</option>
        <option name="unit">%</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
</form>

In the second dashboard I want to hide the entire row if any Total Runs panel has 0 as output. I tried, but it's not working. Is there anything messing up with the tokens from dashboard 1? I tried both depends and rejects, but it's not working:

<progress>
  <condition match="$job.resultCount$ == 0">
    <set token="panel_show">true</set>
  </condition>
  <condition>
    <unset token="panel_show"></unset>
  </condition>
</progress>
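One hedged observation that may explain the behavior (worth verifying, not a confirmed diagnosis): a search that ends in | stats count always returns exactly one result row, even when the count is 0, so a condition on $job.resultCount$ == 0 never matches for these panels. Comparing the returned value itself, for example via a $result.tot_count$ token in a <done> handler, is the usual workaround. A quick SPL check of the one-row behavior:

| makeresults
| where 1==2
| stats count

Running this shows 1 result (a single row with count=0), not 0 results.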
Greetings! I need to know how I can find the use cases that trigger the most alerts in Splunk. Is there any specific search query that can help? I need the use case name and the count of alerts.
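A hedged sketch using the audit index, assuming your role can read _audit; the ss_name field carries the saved search (use case) name:

index=_audit action=alert_fired
| stats count by ss_name
| sort - count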
How can I disable CBC mode and use 3DES on the universal forwarder's port 8089?
Hi,
I've configured the Alert Manager app and an alert in the same app. If I check the index (index=alerts), I can only find events with sourcetype=incident_change instead of sourcetype=alert_metadata. Does anyone have the same issue?
Hi, I want to create an alert that sends an email notification if the count of events crosses a particular threshold between the start of the month and the 15th day of the month. My query is this:

index=akm_ing "xyz.ex.com" "aagkeyid":"49005" |stats count | where count > 600000

Can you please help me with how to achieve this?
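A hedged sketch of one way to set this up: schedule the alert to run only on days 1 through 15 (for example cron 0 8 1-15 * *) and pin the search window to the start of the month with earliest=@mon, so each run counts month-to-date events:

index=akm_ing "xyz.ex.com" "aagkeyid":"49005" earliest=@mon latest=now
| stats count
| where count > 600000

With the threshold in the where clause, the alert can simply trigger when the number of results is greater than zero.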
I'm using the Splunk Add-on for Microsoft Cloud Services, and I'm trying to ingest EHs (Event Hubs). We've set up the SP and enterprise app, created the EHs, and configured the account and inputs in the add-on. The logs show:

yyyy-dd-mm 11:36:20,813 level=INFO pid=18561 tid=MainThread logger=__main__ pos=mscs_azure_event_hub.py:_try_creating_blob_checkpoint_store:567 datainput="AzureEH" start_time= message="Blob checkpoint store not configured"
yyyy-dd-mm 11:36:14,786 level=INFO pid=18427 tid=MainThread logger=splunksdc.loop pos=loop.py:is_aborted:38 datainput="AzureEH" start_time= message="Loop has been aborted."
Hi Team,
We are trying to build a dashboard for the Azure PIM logs in Splunk to visualize who is elevating their admin roles in Azure, what activities they are performing, and how often they require the role. Unfortunately, we are not able to filter on the action in Splunk; in the operations list we couldn't identify anything related to PIM. Please help with the search:

index=client* sourcetype="o365:management:activity" Workload=AzureActiveDirectory action

Regards,
Sai
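A hedged starting point; the exact Operation values for PIM vary by tenant, and the wildcards below are assumptions to verify against your data. Enumerating the role-related operations first makes it easier to spot the ones that correspond to PIM activations:

index=client* sourcetype="o365:management:activity" Workload=AzureActiveDirectory (Operation="*PIM*" OR Operation="*role*")
| stats count by Operation
| sort - count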