All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Is there a way to specify a timezone in a data model? I have an eval field called date that relies on Splunk's _time field, but I want to ensure it matches a specific timezone rather than relying on the extracted _time of the log, as it's in UTC. I want the timezone to match Brisbane, Australia (UTC+10).
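Since _time is stored as timezone-agnostic epoch seconds and strftime renders it in the searching user's configured timezone, one workaround often suggested is to apply the offset explicitly in the eval. A minimal sketch, assuming the rendering context is UTC and a fixed +10:00 offset (Brisbane observes no DST):

```
| eval date=strftime(_time + 10*3600, "%Y-%m-%d %H:%M:%S")
```

Note that if the searching user's timezone preference is not UTC, this fixed offset stacks on top of that setting, so verify the result against a known event before relying on it.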
Hello, I just installed a new instance of Splunk Enterprise 8.2.1 with the Cisco ISE add-on module 4.1.0, and nothing else. Per the documentation, I should see a Setup action for the ISE add-on, but I don't. Any ideas on what I missed? Really, I haven't configured anything else, and I made sure I am logged into Splunk as an administrator. Thanks, Jerry
I have set up the Graph API input for AuditSignIn.Logs, and logs are inconsistent and randomly missing in Splunk. I am getting this error in the logs:

2021-07-22 15:21:56,991 level=ERROR pid=8208 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api pos=utils.py:wrapper:72 | datainput=b'SignInLogs' start_time=1626991803 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/graph_api.py", line 235, in run
    return consumer.run()
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/graph_api.py", line 114, in run
    self._ingest(message, source)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/graph_api.py", line 124, in _ingest
    self._event_writer.write_event(message.data, source=source)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/event_writer.py", line 161, in write_event
    self._write(data)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/event_writer.py", line 145, in _write
    self._dev.write(data)
BrokenPipeError: [Errno 32] Broken pipe

Any help?
I have data with different event types, say A to M, and I want to find the time difference taken for each event. Example:

index=apple source=datapipe

eventType=newyork        A
eventType=california     B    B-A
eventType=boston         C    C-B
eventType=houston        D    D-C
eventType=dallas         E    E-D
eventType=austin         F    F-D
eventType=Irvine         G    G-E
eventType=Washington     H    H-F
eventType=Atlanta        I    I-H
eventType=San Antonio    J    J-I
eventType=Brazil         K    K-I
eventType=Mumbai         L    L-I
eventType=Delhi          M    M-I

Currently I'm using | streamstats range(_time) as diff window=2, but that gives the differences in sequential order. I want the time differences in the format above. The eventTypes are unique, and I'm using append in my search for each eventType. @sundareshr @ITWhisperer @Nisha18789 @MuS @jasongb @yuanliu @thetech @guilmxm Thank you
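One approach (a sketch, assuming one timestamp per eventType) is to pivot each eventType's timestamp into its own field via dynamic field naming, then compute the non-sequential differences explicitly:

```
index=apple source=datapipe
| stats earliest(_time) as t by eventType
| eval {eventType}=t
| stats values(newyork) as A values(california) as B values(boston) as C values(houston) as D
| eval diff_BA=B-A, diff_CB=C-B, diff_DC=D-C
```

Here `eval {eventType}=t` writes each timestamp into a field named after the eventType value; extend the second stats and the eval clauses for the remaining event types (quote names containing spaces, e.g. "San Antonio").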
Hello, I have 2 CSV lookups that update several times a day. One (A) is from CMDB with the entire list of assets (hostname, ip, user, os, etc.). The other (B) is a list of installed clients for some product, also containing the hostname. I would like a search/dashboard that lists hosts in A that are not found in B, with some additional fields. I haven't found a way to do this with 2 lookups; any ideas? Thanks!

Lookup CSV A: Host1, Host2, Host3
Lookup CSV B: Host1, Host3
Search output: Host2
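A common pattern for "rows in A that are not in B" is a subsearch with NOT. A sketch, assuming illustrative file names A.csv and B.csv and a shared hostname column:

```
| inputlookup A.csv
| search NOT [| inputlookup B.csv | fields hostname]
| table hostname ip user os
```

The subsearch returns the hostname values from B, and NOT excludes the matching rows from A while keeping A's other fields for the table.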
I'm trying to work with a bunch of system logs that are either ERROR or INFO logs. Each has a unique id # that is specific to a certain package. I'm trying to figure out a way to count how many of these unique id #s are only present in INFO logs, meaning there were no issues associated with that id #. There are multiple logs associated with each id #, so if an id # is in 5 INFO logs but 1 ERROR log, it shouldn't be counted. But if it's in only 1 INFO log, it should be counted. I'm a novice with Splunk and I need to figure this out for my internship ASAP, so all help is appreciated. Thanks!
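One way to express "ids that never appear in an ERROR log" is to count ERROR events per id and keep the ids with zero. A sketch, where the index and the field names log_level and id are assumptions to swap for your own:

```
index=system_logs (log_level="INFO" OR log_level="ERROR")
| stats count(eval(log_level="ERROR")) as error_count count as total_logs by id
| where error_count=0
| stats count as ids_with_only_info
```

Drop the final stats if you want the list of qualifying ids rather than just their count.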
Hello, I hope you can help me figure out what is going on. I have a distributed environment with a search head and two indexers. I've recently upgraded to Splunk 8.1.3 from 7.3, but one of my two indexers is not working properly: the splunkd service is taking all the CPU and memory resources, and now the server is painfully slow. On the search head I'm seeing messages like this:
- The percentage of non high priority searches delayed (50%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=8065. Total delayed Searches=4070
- TCPOutAutoLB-0 Errors
Hi, I developed the app "Allkun MON for ISO8583" and published it on Splunkbase some weeks ago. Now I'm doing development validations for an upgrade. I ran a scan with the Python Upgrade Readiness app, and the report ends with a failure (not compatible with Python 3), but the application doesn't use any Python scripts and has no bin folder. How can this validation pass? Regards
Hi Team - I am trying to first search and then aggregate results from the following Splunk logs.

Raw format: "buildDimensionsAttributes: $attribute: $constraint: $result"
Sample message: buildDimensionsAttributes: 6393: AttributeConstraints(-1.0,99.92,2,DoubleFormat): 99.98

In AttributeConstraints, the 1st index corresponds to minval (here -1.0), the 2nd index to maxval (here 99.92), and the 3rd index to decimal (here 2).

I want to first filter the $result values that are out of range (here 99.98 is not between [-1.0, 99.92]), then aggregate (group by) the various $attribute values, and then show something like the below on a dashboard where we can apply our usual time filters:

Attribute# | RecordCountOfOutOfRange | TotalRecords

Thanks AG
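Based on the sample message, a sketch of the extraction and aggregation (the index filter is a placeholder, and the regex assumes the constraint always has four comma-separated parts):

```
index=your_index "buildDimensionsAttributes"
| rex "buildDimensionsAttributes: (?<attribute>\d+): AttributeConstraints\((?<minval>-?[\d.]+),(?<maxval>-?[\d.]+),(?<decimal>\d+),[^)]+\): (?<result>-?[\d.]+)"
| eval result=tonumber(result), minval=tonumber(minval), maxval=tonumber(maxval)
| eval out_of_range=if(result < minval OR result > maxval, 1, 0)
| stats sum(out_of_range) as RecordCountOfOutOfRange count as TotalRecords by attribute
```

The tonumber conversions matter: without them the range comparison can fall back to string ordering.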
Hi, I have seen the dashboard which is running in Splunk but available publicly: https://covid-19.splunkforgood.com/coronavirus__covid_19_ I got the app and its source code from GitHub: https://github.com/splunk/corona_virus I would like to know how the dashboard is made available publicly and how the searches run when we use this dashboard, since viewing it doesn't seem to require authentication. Also, what happens when a lot of people run this dashboard at the same time? #splunk4good
Hi! I am trying to set up an alert that triggers a Jenkins job when the condition is met. In order to trigger the Jenkins job I have to supply a user/password in the POST request. I am not sure if this is supported in Splunk Enterprise version 8.0.5.1?
I just installed the add-on and got Java set up, and I actually have JMX data coming into the main index, but I am not able to see jmx under Settings >> Data inputs >> JMX. I would like to have the data going to another index, but cannot find out how to do this. Here is the output of my print-modinput-config:

/opt/splunk/bin/splunk cmd splunkd print-modinput-config jmx
<?xml version="1.0" encoding="UTF-8"?>
<input>
  <server_host>SRVP01SPLUNK-01</server_host>
  <server_uri>https://127.0.0.1:8089</server_uri>
  <session_key>n0Zfn422VQQDkWH_MV^wkRCj3Zy_2yZVD^WYBSx84i69_3g2f^Ylatg_Mb^OOhhY0iodEKMOgZer23LjMRt5vlr5342o8g1uCDeQ73rYU6lRZw^Wfo</session_key>
  <checkpoint_dir>/opt/data/splunk/modinputs/jmx</checkpoint_dir>
  <configuration>
    <stanza name="jmx://_Splunk_TA_jmx_:mirth_poc" app="Splunk_TA_jmx">
      <param name="config_file">_Splunk_TA_jmx.Splunk_TA_jmx.mirth_poc.xml</param>
      <param name="config_file_dir">etc/apps/Splunk_TA_jmx/local/config</param>
      <param name="disabled">0</param>
      <param name="host">$decideOnStartup</param>
      <param name="index">jmx_mirth</param>
      <param name="interval">30</param>
      <param name="polling_frequency">60</param>
      <param name="python.version">python3</param>
      <param name="sourcetype">jmx</param>
      <param name="start_by_shell">false</param>
    </stanza>
  </configuration>
</input>
Hi all, I have one field that simply shows the latest timestamp of logs. i) I was wondering how I can find the difference between the latest log time and the current system time? ii) Then, with that value, I was hoping to run a condition and print the result to another field (e.g. timeliness). The condition I wanted to implement is: if the difference is greater than 3 hours, put "Out of Sync" in the timeliness field; otherwise, put "Synced" in the timeliness field. My latest log time is in the following format: 2021-07-23 02:54:09 Any help would be greatly appreciated!
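Given that timestamp format, a sketch using strptime to convert the string to epoch and now() for the current time (the field name latest_log_time is an assumption):

```
| eval latest_epoch=strptime(latest_log_time, "%Y-%m-%d %H:%M:%S")
| eval diff_secs=now() - latest_epoch
| eval timeliness=if(diff_secs > 3*3600, "Out of Sync", "Synced")
```

strptime interprets the string in the search-time timezone, so if the logged timestamps are in a different zone than the search head, adjust diff_secs accordingly.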
I need to provide HA and better performance in the Monitoring Console for Enterprise Security (ES). Which health check items in the MC or DMC do you recommend? Thank you in advance.
Hi all, I have a dropdown field that is used to filter the results of a pivot table. Is there a way that I can show and hide a column in the pivot table? For instance, say the token of the dropdown field is 'select_field_1' ('version' and 'daysRemaining' are columns). I'd imagine there is a conditional mechanism where you can do something like: if $select_field_1|s$ = certificate, show daysRemaining and hide version. Any help would be greatly appreciated!
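If the dashboard can be edited as Simple XML (rather than through the pivot editor), one pattern is to drive the table's fields option with a token set by the dropdown's change handler. A sketch, where the choice values, column names, and the inputlookup query are all hypothetical placeholders:

```xml
<input type="dropdown" token="select_field_1">
  <choice value="certificate">certificate</choice>
  <choice value="product">product</choice>
  <change>
    <condition value="certificate">
      <set token="shown_fields">["host","daysRemaining"]</set>
    </condition>
    <condition>
      <set token="shown_fields">["host","version"]</set>
    </condition>
  </change>
</input>

<table>
  <search>
    <query>| inputlookup certificates.csv</query>
  </search>
  <option name="fields">$shown_fields$</option>
</table>
```

The change handler rewrites $shown_fields$ whenever the selection changes, so the table shows daysRemaining only for the certificate choice and version otherwise.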
FYI -- The red-marked URLs from the attached image should be removed from the output of the Splunk query I shared below. Please, can someone help with this?

Query used in the environment:

index=claims_pd env=pd_cloud_e sourcetype=claims:cif:ibuapps "https://" NOT "*.gco.net" NOT "*.gcoddc.net" NOT "*gco.net"
| rex field=_raw "(?<externalURL>https:\/\/.[^\s]+)"
| stats values(externalURL) as externalURL, list(ResponseMessage) as ResponseMessage, count by ServiceName
| sort 0 - count
| dedup externalURL
| append
    [ search sourcetype=claims:cif:ibuapps "javax.net.ssl.SSLException" OR "javax.net.ssl.SSLHandshakeException" OR "Unable to tunnel through proxy" OR "HTTP response '400: Bad Request'" OR "(504)Gateway Timeout" OR "Access is denied" AND (ServiceName OR (doFinally AND "method:handleErrorResponse"))
    | stats list(ResponseMessage) as ResponseMessage, count by ServiceName
    | sort - count
    | return ResponseMessage ]
Scenario: 3-node SHC behind Okta auth.

Suppose you have a URL splunk-foo.com that points to an ALB which load balances user logins between SH1, SH2, and SH3. For example, you navigate to https://splunk-foo.com, you get directed to SH1, then SH1 redirects you to an IdP (like Okta for MFA); after you complete authentication you are logged in.

Let's say that when you initiated the Okta idpCert.pem creation, you used the client cert of SH1's server.pem. Now you will notice that when you log out from SH2 or SH3 you get an error like:

IDP failed to handle logout request. Status="Status Code="urn:oasis:names:tc:SAML:2.0:AuthnFailed"

After re-reading Splunk docs, Okta docs, Community posts, etc. (and becoming thoroughly confused), we inferred that Okta needs a copy of the SH1 server.pem as the clientCert for all other SHC nodes (i.e. SH2 and SH3). So we copied/renamed the SH1 server.pem to idp-okta.pem, dropped it in the .../etc/auth/ dir, and then configured the path in .../etc/system/local/authentication.conf like this:

[saml]
#clientCert = /opt/splunk/etc/auth/server.pem
clientCert = /opt/splunk/etc/auth/idp-okta.pem

Apparently this works. However, I am wondering if this is the correct way? As I said before, the docs are a bit cloudy regarding this Okta setup for SHCs. For a single search head deployment the steps would work. Please advise if there is a better way or if there is some unanticipated SSL concern with this method.

RE: https://docs.splunk.com/Documentation/Splunk/8.0.6/Security/SAMLSHC

This appears to have been updated recently with new directions... or maybe we just misunderstood. It seems that you should not submit a specific SH node's server.pem to Okta to create an idpCert, but rather create a new cert.pem and then install the new "saml" clientCert.pem and the resulting idpCert on all the SHC nodes.

As a side question: if you were to change all the SHC nodes to use the same server.pem (i.e. replace the SH2 and SH3 server.pem with the SH1 server.pem), would that cause SSL to break or mess up SHC performance? Thank you in advance.
We have one ES search head in a distributed environment.
1. If the search head goes down, do alerts queue up and trigger actions once Splunk is back up?
2. If yes, for what period of time are alerts retained?
Thank you.
How can I calculate Latency Over Last Minute, Total Requests/min, and LBs with Highest Unhealthy Host % in the load balancer dashboard? We are facing a production issue, and I'm trying to figure out the root cause from Splunk, but I have not been able to build the correct report for latency. Can someone please help me with this? We are using a Splunk Cloud instance. We have the following fields in the ELB logs:

timestamp, elb, client_ip, client_port, request_processing_time, response_processing_time, elb_status_code, received_bytes, ssl_cipher, ssl_protocol, request, backend_processing_time, backend_status_code

Any help on this would be appreciated.
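With the fields listed above, a starting point (a sketch; the sourcetype is an assumption to replace with your own) is to sum the three processing-time fields into a total latency and bucket per minute per ELB:

```
sourcetype=aws:elb:accesslogs
| eval total_latency=request_processing_time + backend_processing_time + response_processing_time
| bin _time span=1m
| stats avg(total_latency) as avg_latency_secs count as requests_per_min sum(eval(if(backend_status_code>=500,1,0))) as backend_5xx by _time, elb
| eval unhealthy_pct=round(backend_5xx / requests_per_min * 100, 2)
| sort - unhealthy_pct
```

ELB access logs report the processing times in seconds, and "unhealthy host %" here is only approximated by the backend 5xx ratio, since target-group health-check status does not appear in access logs.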
I need to learn how Microsoft email data is ingested into Splunk Enterprise or ES for auditing purposes. I'd appreciate any details.