All Topics


Hi, I have to extract the sum of a particular search output from my query and compare it with the previous month to date. For example, if today is June 15th and the sum for the last 15 days is 150000, I would like to get the same sum for the previous month, i.e., May 1st to 15th, using the same query. Could someone suggest how to do this? I have tried eval epoch30days_ago=relative_time(now(), "-28d@d"), but this is not giving accurate data. Thanks
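A month-to-date comparison usually snaps to the start of the previous month rather than going back a fixed 28 days. A minimal sketch, assuming a hypothetical index and an amount field to sum (the return subsearch sets the outer search's earliest/latest to "start of previous month" through "this moment one month ago"):

```spl
index=my_index sourcetype=my_data
    [| makeresults
     | eval earliest=relative_time(now(), "-1mon@mon"),
            latest=relative_time(now(), "-1mon")
     | return earliest latest]
| stats sum(amount) AS prev_month_to_date
```

Run on June 15th, "-1mon@mon" resolves to May 1st at midnight and "-1mon" to May 15th at the current time of day, which is the month-to-date window described above.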
I have a search query that displays the tickets flowing in and out. Now I want to put a line indicating the backlog on my chart.

index="tickets" $year$
| dedup number
| convert timeformat="%Y-%m-%d %H:%M:%S" num(allFields.createdDate) AS days
| eval week=strftime(days,"%V")
| eval year=strftime(days, "%Y")
| where year=c_year
| stats count by week
| appendcols
    [search index="tickets" $year$
    | dedup number
    | search state!="Resolved" AND state!="Closed" AND state!="Resolution Confirmed" AND assignment_group!="Out of Scope"
    | convert timeformat="%Y-%m-%d %H:%M:%S" num(createdDate) AS date
    | eval weeks=strftime(date,"%V")
    | eval year=strftime(date, "%Y")
    | where year=c_year
    | chart count by weeks]
| appendcols
    [search index="tickets" $year$
    | dedup number
    | search state="Resolved" OR state="Resolution Confirmed" OR state="Closed"
    | convert timeformat="%Y-%m-%d %H:%M:%S" num(resolvedOn) AS days
    | eval out=strftime(days, "%V")
    | eval year=strftime(days, "%Y")
    | where year=c_year
    | chart count by out]

Basically, how can I make the field 'createdDate', used in the first query and the first subquery, common on my chart? The way I did it, the subquery has its own axis, which I do not want. Please refer to the picture. What I am getting is this (where weeks is my backlog). Any help will be much appreciated!
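One way to keep every series on the same week axis is to compute the counts in a single pass over a shared week field, instead of appendcols, which pastes subsearch columns together row-by-row and gives each subsearch its own category axis. A rough sketch, reusing the field and state names from the query above (created count and backlog only; the resolved series could be added the same way with resolvedOn):

```spl
index="tickets" $year$
| dedup number
| eval week=strftime(strptime('allFields.createdDate', "%Y-%m-%d %H:%M:%S"), "%V")
| eval is_backlog=if(state!="Resolved" AND state!="Closed"
                     AND state!="Resolution Confirmed"
                     AND assignment_group!="Out of Scope", 1, 0)
| stats count AS created sum(is_backlog) AS backlog by week
```

Because both series come out of one stats, they share the week axis by construction.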
I would like to have some animation on the dashboard screens with the use of a .gif file. Does Splunk Cloud support .gif files for their dashboard App?
What happens if the same input is rescheduled and the first run is still going?
Option A: the first one stops, the second one starts.
Option B: the first one continues, the second one is skipped.
Option C: both run concurrently.
For example, the interval is 60 seconds, and after a minute my first (modular input) run is still running.
Hi team, I have a process that gets indexed daily for a certain duration. 1) I want to get the duration for which it gets indexed. 2) I want to create an alert that fires when the daily duration is less than the average duration for the month.
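One way to measure the daily duration is the span between the first and last event of each day, then compare it with the 30-day average. A sketch, assuming a hypothetical index name for the process:

```spl
index=my_process earliest=-30d@d latest=@d
| eval day=strftime(_time, "%Y-%m-%d")
| stats range(_time) AS daily_duration by day
| eventstats avg(daily_duration) AS avg_duration
| where daily_duration < avg_duration
```

range(_time) is the difference in seconds between the latest and earliest event of each day; saved as an alert with "number of results > 0", this triggers whenever a day falls below the monthly average.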
Hi Team, I am unable to detect a few business transactions with a custom rule using servlet and HTTP parameter matching, as shown below. The same business transaction is detected automatically, but not by the custom rule. Other BTs are detected using the same method. There are 3 URIs each for login and search, and the 3 are the same, so I am trying to distinguish them using the HTTP parameter. The transactions are not detected in live preview either; they get discovered automatically as transactions, but not via the custom rule. Kindly help.
I installed the CrowdStrike Falcon Intelligence Add-on on our Splunk heavy forwarder. I attempted to configure it, but the configure page doesn't load at all. When I check the browser's console, I see:

External handler failed with code '1' and output: 'REST ERROR[1021]: Fail to decrypt the encrypted credential information - cannot concatenate 'str' and 'NoneType' objects'. See splunkd.log for stderr output.

From splunkd.log:

06-01-2020 11:46:55.999 +0000 ERROR AdminManagerExternal - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [500]: Internal Server Error -- Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-crowdstrike_falcon_intel/bin/ta_crowdstrike_falcon_intel/splunktaucclib/rest_handler/handler.py", line 113, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/TA-crowdstrike_falcon_intel/bin/ta_crowdstrike_falcon_intel/splunktaucclib/rest_handler/handler.py", line 299, in _format_response
    masked = self.rest_credentials.decrypt_for_get(name, data)
  File "/opt/splunk/etc/apps/TA-crowdstrike_falcon_intel/bin/ta_crowdstrike_falcon_intel/splunktaucclib/rest_handler/credentials.py", line 184, in decrypt_for_get
    clear_password = self._get(name)
  File "/opt/splunk/etc/apps/TA-crowdstrike_falcon_intel/bin/ta_crowdstrike_falcon_intel/splunktaucclib/rest_handler/credentials.py", line 389, in _get
    string = mgr.get_password(user=context.username())
  File "/opt/splunk/etc/apps/TA-crowdstrike_falcon_intel/bin/ta_crowdstrike_falcon_intel/solnlib/utils.py", line 154, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-crowdstrike_falcon_intel/bin/ta_crowdstrike_falcon_intel/solnlib/credentials.py", line 118, in get_password
    all_passwords = self._get_all_passwords()
  File "/opt/splunk/etc/apps/TA-crowdstrike_falcon_intel/bin/ta_crowdstrike_falcon_intel/solnlib/utils.py", line 154, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-crowdstrike_falcon_intel/bin/ta_crowdstrike_falcon_intel/solnlib/credentials.py", line 272, in _get_all_passwords
    clear_password += field_clear[index]
TypeError: cannot concatenate 'str' and 'NoneType' objects ". See splunkd.log for more details.

I tried installing the app on my local trial version of Splunk Enterprise; there the configure page loads and I'm able to add the streaming API key and secret successfully. I tried being hacky and copying my local passwords.conf file onto the heavy forwarder server in the same path/location, making sure the file permissions were the same, to no avail. The config page still doesn't load, and the app still isn't configured. What am I missing? (Update: my bad, there are multiple CrowdStrike issues.)
I have a query in Splunk:

index=* STATUS_CODE earliest=-2mon@mon latest=-1mon@mon
| fields STATUS_CODE
| rex field=_raw "STATUS_CODE:(?<status_code>.{0,1}\d)"
| eval success=if(status_code in(0,1),1,0)
| timechart count AS total sum(success) AS success
| eval success_rate=round((success/total)*100,3)
| eval success_rate=success_rate + "%"
| table _time success_rate
| append
    [search index=* STATUS_CODE earliest=-1mon@mon latest=@mon
    | fields STATUS_CODE
    | rex field=_raw "STATUS_CODE:(?<status_code>.{0,1}\d)"
    | eval success=if(status_code in(0,1),1,0)
    | timechart count AS total sum(success) AS success
    | eval success_rate=round((success/total)*100,3)
    | eval success_rate=success_rate + "%"
    | table _time success_rate]

I want to show the single value visualization displaying the increase/decrease in success_rate, but it's not displaying correctly. I mean I need to add a timechart command again, but that's not working. Can anyone help?
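One way to get the trend indicator on a single value visualization is to produce one timechart spanning both months with span=1mon, so the visualization compares the last two data points itself. It also helps to keep success_rate numeric: appending "%" turns it into a string, which the visualization can't trend. A sketch, reusing the rex and eval from the question (with the named group filled in as status_code):

```spl
index=* STATUS_CODE earliest=-2mon@mon latest=@mon
| rex field=_raw "STATUS_CODE:(?<status_code>.{0,1}\d)"
| eval success=if(status_code in(0,1),1,0)
| timechart span=1mon count AS total sum(success) AS success
| eval success_rate=round((success/total)*100,3)
| table _time success_rate
```

With two monthly rows in one result set, append and a second timechart are no longer needed; a "%" unit can be added in the visualization's formatting options instead of in the data.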
I have the below 2 log files with 4 identical columns; only the status differs:

Status1.log
host1,PROD,1666680,mobile1,Staging_Successful
host1,PROD,1666680,mobile2,Staging_Successful
host1,PROD,1666680,mobile3,Staging_Successful

Status2.log
host1,PROD,1666680,mobile1,Deployment_Successful
host1,PROD,1666680,mobile2,Deployment_Successful
host1,PROD,1666680,mobile3,Deployment_Successful

Currently, I am able to extract both files individually using the rex command, but I want to compare the two files and merge the statuses into a table like the output below. Please suggest how to do this.

host1,PROD,1666680,mobile1,Staging_Successful,Deployment_Successful
host1,PROD,1666680,mobile2,Staging_Successful,Deployment_Successful
host1,PROD,1666680,mobile3,Staging_Successful,Deployment_Successful
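Since the first four columns form a common key, one way to merge is to search both sources at once and collect the statuses per key with stats. A sketch with hypothetical field names (host, env, id, device, status) for the five CSV columns:

```spl
(source="*Status1.log" OR source="*Status2.log")
| rex field=_raw "^(?<host>[^,]+),(?<env>[^,]+),(?<id>[^,]+),(?<device>[^,]+),(?<status>.+)$"
| stats values(status) AS status by host env id device
| eval status=mvjoin(status, ",")
```

values(status) gathers Staging_Successful and Deployment_Successful into one multivalue field per device, and mvjoin flattens it to the comma-separated form shown in the desired output.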
Hi, could someone help with the below requirement?

I have an index called sampleindex, and one of its output fields, environment_name, has values app1_dev, app1_tst, app1_prd, app2_dev, app2_tst, app2_prd, app3_dev, app3_tst, app3_prd, along with a few more outputs. I have to build a timechart of cost by environment_name over a given duration. I am not interested in all environments: for example, when I select the environment "dev", I need the result only for app1_dev and app2_dev, and we don't want to show app3_dev. The same applies to the other environments. The combination above is just a sample; I have around 10+ environments (dev, int, tst, prd, etc.) combined with 10 applications (app1, app2, app3, etc.).

I have to use a dropdown as the input field, with choices grouping the environment types as below:

<label>Environment</label>
<choice value="*dev">dev</choice>
<choice value="*tst">tst</choice>
<choice value="*prd">prd</choice>
<default>*prd</default>
</input>

When I select the value "dev", the chart should show app1_dev and app2_dev. Could someone help with how to query this? I have tried using a case statement, but it returns a sum based on the environment naming I select from the dropdown:

eval namespace=case(match(environment_name,"app1-dev"),"dev", match(environment_name,"app2-dev"),"dev", match(environment_name,"app3-dev"),"dev")

(Re-phrased the question again.)
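One option is to skip the case() mapping entirely and use the dropdown token directly as a wildcard filter, so each matching environment keeps its own series on the chart. A sketch, assuming the dropdown's token is named env_tok (the opening <input> tag isn't shown in the snippet above, so the real token name may differ) and that cost is the field being charted:

```spl
index=sampleindex environment_name=$env_tok$
| timechart sum(cost) BY environment_name
```

With choice values like *dev, selecting "dev" expands this to environment_name=*dev, which matches every name ending in dev; specific apps can be excluded by adding, e.g., environment_name!=app3_* to the base search.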
Hi everyone, I would like to list all the alerts that are set up by users, not by Splunk apps like ITSI/DMC, using the REST API. Please help me. I used the below queries, but they did not give proper results.

| rest /services/saved/searches
| search title=*
| rename title AS "Title", description AS "Description", alert_threshold AS "Threshold", cron_schedule AS "Cron Schedule", search AS "Search", action.email.to AS "Email", alert_comparator AS "Comparison", dispatch.earliest_time AS "frequency", alert.severity AS "SEV", author AS "Author", disabled AS "Disabled-True"
| eval Severity=case(SEV == "5", "Critical-5", SEV == "4", "High-4", SEV == "3", "Warning-3", SEV == "2", "Low-2", SEV == "1", "Info-1")
| table Title, Description, Threshold, Comparison, "Cron Schedule", frequency, Severity, Search, Email, Author, Disabled-True

| rest /services/alerts/fired_alerts/

| rest /servicesNS/admin/-/alerts/alert_actions

| rest /servicesNS/-/-/saved/searches
| search alert.track=1
| fields title description search disabled triggered_alert_count actions action.script.filename alert.severity cron_schedule
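One way to separate user-created alerts from app-shipped ones is to filter on the owning app and owner that the endpoint returns in the eai:acl fields. A sketch (the app names to exclude are assumptions; adjust the list to the apps in your environment):

```spl
| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1 alert.track=1
| where NOT match('eai:acl.app', "^(itsi|splunk_monitoring_console)$")
| table title, eai:acl.app, eai:acl.owner, cron_schedule, actions
```

Filtering on eai:acl.app removes searches shipped inside app packages, while eai:acl.owner shows which user created each remaining alert.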
Hi - we are having issues with a Windows UF that we have to restart roughly weekly to clear the issue below, which happens at random times (the parsingQueue error being the first in the chain); the TcpOutputProc errors continue until the UF is restarted. The amount of data being sent [hourly, on the hour] is very small. Is this an issue with the forwarder or with the remote Splunk indexer? The forwarder seems to work OK at all other times. NB: 'Phone Home' messages removed. I can't see this exact scenario in other related Splunk questions. Many thanks!!

"05-14-2020 14:45:37.371 +0100 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 300 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data
"05-14-2020 14:43:57.048 +0100 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 200 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data
"05-14-2020 14:43:40.009
"05-14-2020 14:43:22.638 +0100 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...","2020-05-14T14:43:22.638+0100",PRDU0000001,"_internal",1,"C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log",splunkd
Looking for a step-by-step guide to integrate Skybox with Splunk.
I am looking for the configuration document for getting logs from Zscaler into Splunk.
Hi Team, is it possible to link to a search in a new tab showing the raw events when we click on a particular value in a line chart?
So I have a log with multiple VPN connections, and some of them reconnect to the same session multiple times a day. For example:

08:02:00 - User A login
08:10:12 - User A login, replace old connection
08:12:13 - User A login, replace old connection
08:15:13 - User A logout, disconnected

When I use transaction, Splunk only gets the events at 08:15:13 and 08:12:13, but I want it to include the earliest event at 08:02:00. Is there any way to achieve that?
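One option is to close each transaction only on the disconnect event, so the intermediate "replace old connection" logins stay inside the same group instead of starting new ones. A sketch, assuming a hypothetical index/sourcetype and that the string "disconnected" appears in the raw logout event:

```spl
index=vpn sourcetype=vpn_log
| transaction user endswith="disconnected" maxevents=-1
| table user, _time, duration, eventcount
```

With no startswith, the transaction for each user accumulates from its first event (08:02:00 in the example) until the endswith match at 08:15:13, and duration covers the full span; maxevents=-1 lifts the default event cap.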
My data:

Name  spent  income
A     10     20
B     20     40
C     30     60
A     40     80
B     50     100

The outcome should be:

Name  spent  income
A     50     100
B     70     140
C     30     60
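The table above is a plain sum-by-Name aggregation; in the real search only the final stats line is needed. A sketch that can be pasted into a search bar to verify (makeresults format=csv requires a recent Splunk version):

```spl
| makeresults format=csv data="Name,spent,income
A,10,20
B,20,40
C,30,60
A,40,80
B,50,100"
| stats sum(spent) AS spent, sum(income) AS income BY Name
```

This collapses the repeated A and B rows into single rows with totals 50/100 and 70/140, matching the desired outcome.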
Hello, I'm running this query:

index=prod eventtype="csm-messages-dhcpd-lpf-eth0-listening" OR eventtype="csm-messages-dhcpd-lpf-eth0-sending" OR eventtype="csm-messages-dhcpd-send-socket-fallback-net" OR eventtype="csm-messages-dhcpd-write-zero-leases" OR eventtype="csm-messages-dhcpd-eth1-nosubnet-declared"
| transaction maxpause=2s maxspan=1s maxevents=5
| eval max_time=(duration + _time)
| eval min_time=(_time)
| rename kafka_uuid as uuids
| where eventcount!=5
| table eventtype, min_time, max_time, tail_id, uuids
| eval eventtype="csm_dhcp_anomaly"

Now I have a CSV file and I want to read the eventtype values from there. How can I call inputlookup? All of my tries didn't work. Thanks.
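One way is to generate the eventtype="..." OR eventtype="..." clause from the CSV with a subsearch and format. A sketch, assuming the CSV is uploaded as a lookup named eventtypes.csv with a column called eventtype (both names are assumptions):

```spl
index=prod
    [| inputlookup eventtypes.csv
     | fields eventtype
     | format]
| transaction maxpause=2s maxspan=1s maxevents=5
| where eventcount!=5
```

format renders the subsearch rows as ((eventtype="csm-...") OR (eventtype="csm-...")), which the outer search substitutes in place of the hand-written OR list, so new eventtypes can be added by editing the lookup instead of the query.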
I want to set up my Splunk forwarders on Linux machines to restart automatically after the machines are rebooted. Is it possible? If so, what is the process?
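Yes - the forwarder ships a boot-start helper that installs an init.d or systemd entry for you. A sketch, run as root, assuming the default install path /opt/splunkforwarder and a "splunk" service account (adjust both to your environment):

```
# register the forwarder to start at boot, running as the "splunk" user
/opt/splunkforwarder/bin/splunk enable boot-start -user splunk

# to undo later:
/opt/splunkforwarder/bin/splunk disable boot-start
```

After this, the OS starts splunkd automatically on every reboot; no cron job or manual restart is needed.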