All Topics



Hello all, I'm looking for some help with a perfmon search:  index=perfmon host=myhost01s* sourcetype="PerfmonMk:LogicalDisk" instance=_total | timechart sum(Disk_Transfers/sec) span=90s   This gives me the total IO of the hosts, but it only seems accurate when charted at a 90-second span, which is how often the input polls. If I change the span, the sum simply adds all the readings in that span together. What I want is to show longer periods of time while still summing each 90-second reading within its own bucket (rather than adding every value across the whole span). In other words, I want to force the collection interval to always be the bucket. Any help is much appreciated.
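One possible approach (a sketch, untested against this data): lock the inner bucketing to the 90-second collection interval with bin, compute the per-interval sums there, and then re-aggregate those sums over whatever larger span you want, e.g. as an hourly average and peak:

```spl
index=perfmon host=myhost01s* sourcetype="PerfmonMk:LogicalDisk" instance=_total
| rename "Disk_Transfers/sec" AS disk_tps
| bin _time span=90s
| stats sum(disk_tps) AS io_per_interval BY _time
| timechart span=1h avg(io_per_interval) AS avg_io, max(io_per_interval) AS peak_io
```

Because the sum is pinned to the 90-second buckets in the stats step, the outer timechart span can be changed freely without readings bleeding together.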
What Splunk server should contain the lookup tables for all servers to use?
I have installed the Netskope add-on for Splunk on Splunk Cloud, but it is still in the loading state and the configuration screen does not appear. When I looked at the messages, I saw the following two: (1) "Splunk must be restarted. Splunk must be restarted for changes to take effect. Contact Splunk Cloud Support to complete the restart." (2) "User 'sc_admin' triggered the 'create' action on app 'TA-NetSkopeAppForSplunk', and the following objects required a restart: checklist, eventgen". I'm trying the trial version; how can I ask for support? Is it possible to contact support on the trial version? Is there any way to restart the instance by myself?
I've seen splunk.clilib used in numerous Splunk apps; however, I can't find any documentation for it online. Are there any docs available?
I am trying to create a panel on an existing dashboard that displays the total number of alarms as a single figure. The alarm field is called alarmedObj, and I need to count it to give a total of all the alarms. Can anyone help with a search query for this?
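A minimal sketch, assuming each alarm is one event carrying alarmedObj (the index and sourcetype below are placeholders to replace with your own):

```spl
index=your_index sourcetype=your_sourcetype alarmedObj=*
| stats count AS total_alarms
```

Rendered with the Single Value visualization, this gives one panel-sized number; use dc(alarmedObj) instead of count if you want distinct alarmed objects rather than total alarm events.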
I have two different sourcetypes, src_a and src_b. src_a is a CSV uploaded from a server (it has the expected result count for each event); the data has not changed since October, so there has been no upload after that. src_b has the daily actual result count for each event. I want to compare src_a (the last data received) to src_b (the last 3 days) and show the variance. Please help.
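A hedged sketch of one way to join the two: the field names event and expected_count are assumptions, to be replaced with whatever the CSV actually contains:

```spl
index=your_index sourcetype=src_b earliest=-3d@d
| stats count AS actual BY event
| append
    [ search index=your_index sourcetype=src_a
      | stats latest(expected_count) AS expected BY event ]
| stats values(actual) AS actual, values(expected) AS expected BY event
| eval variance = actual - expected
```

Since src_a has not been updated since October, remember to widen the time range of the subsearch (or use All Time for it) so the CSV events are actually found.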
Hi Splunkers, I'm working on a Splunk and ServiceNow integration, where the ServiceNow team wants to pull a report from Splunk through the REST API. Can anyone suggest how to do it? TIA.
I have been asked to add the new Linux HFs to the forwarding configuration and ensure that logs are passing through them. How do I verify whether that condition is met?
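One way to sanity-check this (a sketch): every Splunk instance logs its inbound forwarder connections in metrics.log, so on the indexers the directly connected peers should now be the new HFs, and on the HFs themselves the endpoint hosts should appear:

```spl
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen BY splunk_server, hostname, sourceIp
| convert ctime(last_seen)
```

Run against the indexers: if the hostname values listed are the HF names (rather than the endpoints directly), traffic is flowing through the HFs; restricting the search to the HFs' own _internal data shows which endpoints are connecting to them.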
We are logging an application deployed in Kubernetes and ingesting its Tomcat localhost access logs into Splunk via HEC (on an HF). I've pushed props.conf and transforms.conf to the HF as well as to the indexers through the indexer master, but the problem is that neither the extractions nor the transforms are working at all.

Sample log:
10.1.0.225 - - [12/Mar/2021:13:39:51 +0000] "PUT /outlookaddin/v1/edap/sessions HTTP/1.1" 200 25

props.conf:
[tomcat:localhost]
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
ANNOTATE_PUNCT = false
SHOULD_LINEMERGE = false
TIME_PREFIX = \[
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
LINE_BREAKER = ([\r\n]+)\d+\.\d+.\d+\.\d+
TRUNCATE = 0
EXTRACT-access = ^(?P<ip>[^\s]+)\s(?P<indent>(-|\w+))\s(?P<user>(-|\w+))\s\[(?<req_time>[^\]]+)\]\s\"(?P<method>\w+)\s(?P<request_uri>[\S]+)\s(?P<protocol>[^\"]+)\"\s(?P<status>\d{3})\s(?P<bytes_sent>(?:\d+|-))
FIELDALIAS-bytes_in = bytes_sent AS bytes_in
FIELDALIAS-http_method = method AS http_method
FIELDALIAS-uri_query = request_uri AS uri_query
FIELDALIAS-ip = ip AS src
EVAL-bytes_in = if(bytes_in=="-", 0, bytes_in)
EVAL-bytes_sent = if(bytes_sent=="-", 0, bytes_sent)
EVAL-vendor_product = "Apache Tomcat"
EVAL-product_family = "Apache Foundation Software"
EVAL-bytes = coalesce(bytes_in, 0)+coalesce(bytes_out, 0)
FIELDALIAS-response_code = status AS response_code
TRANSFORMS-anonymize = token-anonymizer

transforms.conf:
[token-anonymizer]
REGEX = (?m)^(.*accessToken\=).+(tokenType.*refreshToken=).+(expiresInSeconds.*username\=)\w+(.+ParamKey-dimensions-ParamKey\-).*(ParamKey.+)
FORMAT = $1######&$2######&$3######&$4#######$5
DEST_KEY = _raw
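A likely cause worth checking (an assumption, not a confirmed diagnosis): EXTRACT-, FIELDALIAS-, and EVAL- are search-time settings and only take effect on the search heads, while TIME_*, LINE_BREAKER, and TRANSFORMS- are index-time settings that must sit on the first full Splunk instance that parses the data (here, the HF running HEC); also confirm the sourcetype on the HEC events is exactly tomcat:localhost. Split sketch:

```ini
# props.conf on the SEARCH HEADS (search-time settings), sketch:
[tomcat:localhost]
EXTRACT-access = ^(?P<ip>[^\s]+)\s...   # full regex as above
FIELDALIAS-ip = ip AS src
# ... remaining FIELDALIAS-/EVAL- lines ...

# props.conf on the HF running HEC (index-time settings), sketch:
[tomcat:localhost]
TIME_PREFIX = \[
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
TRANSFORMS-anonymize = token-anonymizer
```

Note that events sent to the HEC /event endpoint arrive pre-split, so LINE_BREAKER has no effect there; the /raw endpoint is needed if line breaking should apply.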
How can I forward all the data that is sent to Splunk on to a different IP address or device?
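If this means sending a copy of everything to another (possibly non-Splunk) receiver, the usual place is outputs.conf on the forwarders or HF. A sketch with placeholder names and a documentation IP:

```ini
[tcpout]
defaultGroup = primary_indexers, third_party

[tcpout:primary_indexers]
server = your-indexer:9997

[tcpout:third_party]
server = 192.0.2.10:514
# raw (uncooked) stream, which non-Splunk receivers usually need
sendCookedData = false
```

Listing both groups in defaultGroup duplicates the data to both destinations; drop primary_indexers from the list to redirect instead of clone.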
I am looking for a search that returns event(s) only when the searched value persists for a set length of time. Using Windows performance monitoring searches, I am looking at CPU "Peak" values but would like to add a condition on duration. Example:  eventtype="perfmon_windows" (Host="hostname") object="Processor" counter="% Processor Time" instance="*" | stats sparkline(avg(Value)) as Trend avg(Value) as Average, max(Value) as Peak, latest(Value) as Current, latest(_time) as "Last Updated" by Host | convert ctime("Last Updated") | sort - Current | eval Average=round(Average, 2) | eval Peak=round(Peak, 2) | eval Current=round(Current, 2) | where Peak>85  Right now that search works very well, but a quick spike of CPU will also trigger it. I would like it to fire only when the CPU stays at that peak of 85 for something like 5 minutes or more.
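A sketch of one way to require persistence, using one-minute buckets and a sliding five-bucket window (field names follow the search above; the thresholds are illustrative):

```spl
eventtype="perfmon_windows" Host="hostname" object="Processor" counter="% Processor Time" instance="*"
| bin _time span=1m
| stats avg(Value) AS cpu BY _time, Host
| streamstats global=false window=5 min(cpu) AS sustained_min BY Host
| where sustained_min > 85
```

Taking min over the window means every one of the last five minutes was above 85, so a single short spike no longer matches.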
Hello, I am getting the following errors while searching in Splunk:
Could not load lookup=LOOKUP-cisco_pix_severity_lookup
Could not load lookup=LOOKUP-citrix_netscaler_availability_status
Could not load lookup=LOOKUP-citrix_netscaler_ha_states
Could not load lookup=LOOKUP-f5_icontrol_availability_status
Could not load lookup=LOOKUP-f5_icontrol_ha_states
I copied the apps from another Splunk deployer and am now getting these errors. I can see the lookup CSV files are there, but the errors persist. Am I missing something? Please advise. Thanks
I have a datasource that drops data into Splunk every 10 minutes containing data about my team's workflow. The data in Splunk looks something like this:
10:00am "Priority" = 10
10:00am "Normal" = 100
10:10am "Priority" = 8
10:10am "Normal" = 102
10:20am "Priority" = 12
10:20am "Normal" = 95
etc. I want to create a table that looks like this:
Priority Type, Tickets Now, Tickets 1 hour ago, Tickets 24 hours ago
Normal,95,100,103
Priority,12,10,8
My search is:
index="data" source="log" earliest=-12hr TicketPriority IN ("Normal", "Priority") | bin field3 span=10m | dedup TicketPriority | stats values(field3) by TicketPriority
How can I get it to add the numbers of tickets from 1 hour ago and 24 hours ago?
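A hedged sketch, assuming each event carries a numeric field (called TicketCount here, a placeholder) with the reading: tag each event by how old it is, keep only the three points of interest, and pivot:

```spl
index="data" source="log" TicketPriority IN ("Normal", "Priority") earliest=-25h
| eval age = now() - _time
| eval when = case(age < 600, "Tickets Now",
                   abs(age - 3600) <= 300, "Tickets 1 hour ago",
                   abs(age - 86400) <= 300, "Tickets 24 hours ago")
| where isnotnull(when)
| stats latest(TicketCount) AS tickets BY TicketPriority, when
| xyseries TicketPriority when tickets
| table TicketPriority "Tickets Now" "Tickets 1 hour ago" "Tickets 24 hours ago"
```

The 300-second tolerances simply catch the 10-minute sample nearest each target time; adjust them to the actual ingestion cadence.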
Hi, I am having issues where some of the sourcetypes are not getting data into Splunk from Log Analytics. Upon checking some logs I can see the below:

2021-03-11 13:07:39,577 ERROR pid=13031 tid=MainThread file=base_modinput.py:log_error:307 | OMSInputName="MyInput" status="400" step="Post Query" response="{"error":{"message":"Response size too large","code":"ResponseSizeError","correlationId":"XXX","innererror":{"code":"ResponseSizeError","message":"Maximum response size of 67108864 bytes exceeded. Actual response Size is 73664723 bytes."}}}"
2021-03-11 13:07:39,577 ERROR pid=13031 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/log_analytics.py", line 96, in collect_events
    input_module.collect_events(self, ew)
  File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/input_module_log_analytics.py", line 86, in collect_events
    for i in range(len(data["tables"][0]["rows"])):
UnboundLocalError: local variable 'data' referenced before assignment
2021-03-11 13:08:18,608 ERROR pid=13216 tid=MainThread file=base_modinput.py:log_error:307 | OMSInputName="MyInput2" status="400" step="Post Query" response="{"error":{"message":"Response size too large","code":"ResponseSizeError","correlationId":"XXX","innererror":{"code":"ResponseSizeError","message":"Maximum response size of 67108864 bytes exceeded. Actual response Size is 73136457 bytes."}}}"
2021-03-11 13:08:18,608 ERROR pid=13216 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/log_analytics.py", line 96, in collect_events
    input_module.collect_events(self, ew)
  File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/input_module_log_analytics.py", line 86, in collect_events
    for i in range(len(data["tables"][0]["rows"])):
UnboundLocalError: local variable 'data' referenced before assignment

Am I hitting a limitation? If so, is there any way to overcome this? Any suggestions appreciated... @jkat54
I'm using Splunk to examine the event logs on some servers, looking for details regarding application crashes, with the following search:   index=main "ORA-"   This search returns a "Message" field that contains text beginning like this:   tman.oci.exe.42636 (trace:0) (DbDmlStmtHandle::Execute): Error[343] -> Database access error (-1). Msg: [ ORA-00001: unique constraint (UCICOBG.IXCTPROFILEUNIQUEID) violated ] .13808 (trace:0) (DBReopenDatabase(connection lost)): Error[343] -> Database access error (-3113). Msg: [ ORA-03113: end-of-file on communication channel Process ID: 0   I'm trying to extract a field with just the Oracle error code in it (in this case "ORA-00001", "ORA-03113"). I got as far as the expression (ORA-[0-9].*). How can I use rex to extract just that field?
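A sketch with rex, assuming the codes in this data always follow the ORA-NNNNN shape (five digits) seen in the sample above:

```spl
index=main "ORA-"
| rex field=Message max_match=0 "(?<ora_code>ORA-\d{5})"
| stats count BY ora_code
```

max_match=0 captures every occurrence in a message (the sample contains two), making ora_code a multivalue field rather than stopping at the first match.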
I would like to view my ITSI dashboards on a TV or mobile device. Is this supported by ITSI?
Hello Splunkers, My search executes monthly over a period of 3 months of data; since it is currently March, my last 3 months are Dec, Jan, and Feb. In the Trend column, I need the difference between the previous 2 months, by Priority. As the months roll forward, the column names would also change. Also, can I show the difference with a pictorial view? For example, for High it should be a downward arrow, as the percentage decreased by 3; for Medium a linear arrow; for Low an upward arrow, as it increased by 1%. Thanks in advance for your time. Cheers,
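A hedged sketch of the month-over-month difference per Priority (the index name is a placeholder, and the arrow symbols are just one way to render the direction):

```spl
index=your_index earliest=-3mon@mon latest=@mon
| timechart span=1mon count BY Priority
| untable _time Priority monthly_count
| streamstats global=false current=f window=1 latest(monthly_count) AS prev BY Priority
| eval trend = monthly_count - prev
| eval arrow = case(trend > 0, "▲", trend < 0, "▼", true(), "►")
```

untable turns the chart back into rows so streamstats can compare each month with the previous one within the same Priority; the first month of each series has no prev and therefore no trend.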
How do I set up retention policies for indexes?
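Retention is typically controlled per index in indexes.conf on the indexers. A sketch with placeholder values:

```ini
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# freeze (delete, unless coldToFrozenDir is set) buckets older than ~90 days
frozenTimePeriodInSecs = 7776000
# also freeze the oldest buckets once the index exceeds ~500 GB
maxTotalDataSizeMB = 500000
```

Whichever limit is hit first wins: retention is enforced per bucket, so data ages out when a bucket's newest event crosses the time threshold or when the size cap forces the oldest buckets out.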
How do I parse data in Splunk before indexing?
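Index-time filtering and rewriting is usually done with props.conf and transforms.conf on the first full (heavy) Splunk instance that sees the data. A sketch that drops DEBUG events, with placeholder stanza names:

```ini
# props.conf
[my_sourcetype]
TRANSFORMS-filter_debug = drop_debug

# transforms.conf
[drop_debug]
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue
```

Routing matching events to nullQueue discards them before indexing; the same TRANSFORMS- mechanism can rewrite _raw, change the index, or override metadata by targeting other DEST_KEY values.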
Hello, Last week I started with the TrackMe app and so far I'm really impressed with all the prebuilt functionality. Over the last few days I went through the configurations step by step and applied them to my data. Today I found some alerts due to outliers in sourcetypes. My problem is that in some cases I don't understand why the event count in the outlier detection got that high, because searching the index data in that time range tells me everything is normal and the count is not as high as "detected". The detected outlier shows a count of 22 (see screenshot), but the indexed data is still at an event count of 1 (see screenshot). Where is the count of 22 coming from? How do I investigate this? Is there something that I maybe configured the wrong way? Many thanks and happy splunking, Sara