All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi everyone! Is it possible to pass a parameter from a search to the next "action|url" step, like $result$ in the description? If not, is it possible to change this behavior by modifying that next step, and if so, how? Thanks.
Hi All, I am trying to create a dashboard showing total calls to a particular business transaction using an ADQL query. I am able to fetch this from all applications except the Lambda application: the transactions show up in the application dashboard, but I am unable to get the count using the query, and I am also unable to list the BTs of that application using a query. The same query works for other applications and business transactions. Please find below the query I used:

SELECT count(*) FROM transactions WHERE application = "test-1" AND transactionName = "/test/api/01/"

Please check and let me know why I am not able to pull this.

Regards, Fadil
Hi, what does session end reason = aged-out mean? We are facing a one-way audio issue. Could this possibly be the reason? Thanks.
Hi, I am currently working with NGINX Plus as an ingress controller for my Kubernetes cluster and using SC4S to forward logs to Splunk Enterprise. However, I notice that SC4S does not forward all of the logs, including the App Protect WAF and DoS logs. Do the WAF and DoS modules require special setup to forward logs? I tried with syslog-ng as in this example: https://github.com/nginxinc/kubernetes-ingress/blob/v3.6.2/examples/ingress-resources/app-protect-dos/README.md but the logs are not showing in Splunk Enterprise. Thanks.
Hi All - I need help with a fairly complex search I am being asked to build by a user. The ask is that the fields below are extracted from this XML sample:

[2024-09-10 07:27:46.424 (TID:14567876)] <subMerchantData>
[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <pfId>499072</pfId>
[2024-09-10 07:27:46.424 (TID:145767627)] <subName>testname</subName>
[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <subId>123456</subId>
[2024-09-10 07:27:46.424 (TID:145767627)] <subStreet>1 TEST LANE</subStreet>
[2024-09-10 07:27:46.424 (TID:145767627)] <subCity>HongKong</subCity>
[2024-09-10 07:27:46.424 (TID:145767627)] <subState>HK</subState>
[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <subCountryCode>344</subCountryCode>
[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <subPostalCode>1556677</subPostalCode>
[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <subTaxId>-15566777</subTaxId>
[2024-09-10 07:27:46.424 (TID:14567876)] </subMerchantData>

This search doesn't pull anything back, I believe because these are not extracted fields:

index=test merchantCode=MERCHANTCODE1 subCountryCode=* subState=* orderCode=* | stats count by merchantCode subCountryCode subState orderCode

In addition to the fields in the search above, I also need subState, subCountryCode, subCity, pfId, subName, subId, subPostalCode, and subTaxId. I'm not sure how this can be fulfilled; could anyone help with writing a search that would allow me to extract this info within a stats count? Thanks, Tom
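Not an answer from the original thread, but a sketch of the extraction idea: since every value in the sample sits inside a <tag>value</tag> pair, one generic regex can capture the element name and its value together (in SPL this would correspond to something like rex max_match=0 with the same pattern; the sample lines below are taken from the question, and the field names come from the XML itself).

```python
import re

# A few of the sample log lines from the question.
sample = (
    "[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <pfId>499072</pfId>\n"
    "[2024-09-10 07:27:46.424 (TID:145767627)] <subName>testname</subName>\n"
    "[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <subCountryCode>344</subCountryCode>\n"
)

# Capture each element name and its text; the backreference \1 ensures the
# closing tag matches the opening tag, so stray brackets are not picked up.
fields = dict(re.findall(r"<(\w+)>([^<]*)</\1>", sample))
print(fields)
```

The same pattern applied per event yields one name/value pair per line, which can then be rolled up with stats; this is only a sketch of the regex logic, not a tested Splunk configuration.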
Hi Splunkers! I'm facing an issue that is driving me crazy. I've got to set the timestamp in the following logs (the timestamp field is the 11th field, the first one being the insert time by the proxy itself):

2024-09-16T13:12:54+02:00 Logging-Client  "-1","username","1.2.3.4","POST","872","2211","www.facebook.com","/csp/reporting/","OBSERVED","","1726484997","2024-09-16 11:09:57","https","Social Networking","application/x-empty","","Minimal Risk","Remove 'X-Forwarded-For' Header","200","10.97.5.240","","","Firefox","102.0","Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0","firefox.exe","1.2.3.4","443","US","","t","t","t","f","f","computerName","","1.2.3.4","1.2.3.4","8080"

So, I'm using a regex to extract fields and set the real timestamp in my props.conf:

[mySourcetype]
SHOULD_LINEMERGE = false
EXTRACT-mySourcetype = ^[^,\n]*,"(?P\w+)","(?P[^"]+)","(?P\w+)","(?P[^"]+)[^,\n]*,"(?P[^"]+)[^,\n]*,"(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)"$
TIME_PREFIX = (?:[^,]+,){11}
TIME_FORMAT = %Y-%m-%d %H:%M:%S

Then I get different results depending on the source:

File uploaded directly on the search head: extraction OK, timestamp OK
File read from a universal forwarder: extraction OK, timestamp failed

There is NO heavy forwarder between the UF and the indexers. The props.conf is deployed only on the search heads. So, something is tricky here! If someone has an idea, I would appreciate it. Cheers.
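As a quick sanity check of that TIME_PREFIX (a sketch using Python's re module rather than Splunk itself, on a shortened copy of the log line above): (?:[^,]+,){11} skips eleven comma-separated segments, and since the first segment runs from the start of the line up to the first comma, the match should end exactly at the quoted timestamp. If the regex is sound, the difference between upload and UF ingestion likely lies in where the props are applied, because TIME_PREFIX and TIME_FORMAT are index-time settings and take effect on the parsing tier, not on search heads.

```python
import re

# Shortened version of the proxy log line from the question.
line = ('2024-09-16T13:12:54+02:00 Logging-Client  "-1","username","1.2.3.4",'
        '"POST","872","2211","www.facebook.com","/csp/reporting/","OBSERVED",'
        '"","1726484997","2024-09-16 11:09:57","https"')

# TIME_PREFIX from props.conf: skip 11 comma-terminated segments.
m = re.match(r"(?:[^,]+,){11}", line)
rest = line[m.end():]
print(rest[:22])
```

The remainder starts at "2024-09-16 11:09:57", which is what TIME_FORMAT = %Y-%m-%d %H:%M:%S expects, so the prefix itself checks out.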
Hello there, I'm creating a visualization using Dashboard Studio and showing some fields with single value visualizations, but the data font size displayed is dynamic, as in the capture. How can I make the data font size static at a certain size? I already added a fontSize value, but there is no change:

"options": {
    "fontSize": 24
}
We have a cluster with two search heads and two indexers. We need to install the Enterprise Security app on the search heads. The question arises regarding the summary index and indexes created during the Enterprise Security installation, like IOC and notable. Should these indexes be created with the same names on our indexers?
When I try to download the Extension Manager, I am unable to access the page or download it. Can you please help me download the appdynamics-extensionmanager.zip file for Windows?
I have seen the Splunk documentation for setting up a Splunk multisite cluster, but I have not seen anything related to the Monitoring Console and the SH deployer. Can someone suggest how to set up these two?
Hi, I am having a hard time extracting multivalue fields from an event using transforms with MV_ADD = true. It seems to be partially working, extracting the first and third values present in the event but skipping the second and fourth. The regex I am using appears to match all the values perfectly in regex101, but I am not sure why Splunk is unable to capture them all. Following are the sample event and regex I am using.

Event:

postreport=test_west_policy\;passed\;(first_post:status:passed:pass_condition[clear]:fail_condition[]:skip_condition[]\;second_post:status:skipped:pass_condition[clear]:fail_condition[]:skip_condition[timed_out]\;third_post:status:failed:pass_condition[]:fail_condition[error]:skip_condition[]\;fourth_post:status:passed:pass_condition[clear]:fail_condition[]:skip_condition[])

Regex (https://regex101.com/r/r66eOz/1):

(?<=\(|]\\;)(?<post>[^:]+):status:(?<status>[^:]*):pass_condition\[(?<passed_condition>[^\]]*)\]:fail_condition\[(?<failed_condition>[^\]]*)\]:skip_condition\[(?<skipped_condition>[^\]]*)\]

So Splunk matches all values for first_post and third_post in the above event and skips second_post and fourth_post. I tried the same regex with the rex command, and there it matches only the first_post field values:

|rex field=raw_msg max_match=0 "(?<=\(|]\\;)(?<post>[^:]+):status:(?<status>[^:]*):pass_condition\[(?<passed_condition>[^\]]*)\]:fail_condition\[(?<failed_condition>[^\]]*)\]:skip_condition\[(?<skipped_condition>[^\]]*)\]"

Can someone please help me figure out if I am missing something here? Thanks.
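One way to test the parsing outside Splunk (a sketch in Python, not the original transforms stanza): lookbehinds with branches of different widths, like (?<=\(|]\\;), behave differently across regex engines and are a common source of "works in regex101 but not elsewhere" gaps, so this version anchors on the field-name token itself instead of a lookbehind. Against the sample event it yields all four posts.

```python
import re

# Sample event from the question; raw strings keep the literal \; delimiters.
event = (
    r"postreport=test_west_policy\;passed\;("
    r"first_post:status:passed:pass_condition[clear]:fail_condition[]:skip_condition[]\;"
    r"second_post:status:skipped:pass_condition[clear]:fail_condition[]:skip_condition[timed_out]\;"
    r"third_post:status:failed:pass_condition[]:fail_condition[error]:skip_condition[]\;"
    r"fourth_post:status:passed:pass_condition[clear]:fail_condition[]:skip_condition[])"
)

# No lookbehind: \w+ cannot cross the ( or \; delimiters, so each post name
# starts cleanly at its own segment.
pattern = re.compile(
    r"(?P<post>\w+):status:(?P<status>\w*)"
    r":pass_condition\[(?P<passed>[^\]]*)\]"
    r":fail_condition\[(?P<failed>[^\]]*)\]"
    r":skip_condition\[(?P<skipped>[^\]]*)\]"
)
matches = [m.groupdict() for m in pattern.finditer(event)]
print([m["post"] for m in matches])
```

If the reworked pattern matches everything here, a similar lookbehind-free variant may be worth trying in the transforms stanza; this is a sketch of the regex logic only, not a verified Splunk configuration.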
Hi All, i am using mvzip while working with JSON file. Now in the new Splunk dashboards seems like mvzip command is depricated. Is there any way to extract values from nested JSON apart from mvzip?
Props used:

[sql:logs]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n])\[\w{3}\s\w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\sEDT\s\d{4}\]
TIME_PREFIX=\{
TIME_FORMAT=%a %b %d %H:%M:%S EDT %Y

With the props above, only events with two-digit dates (Aug 25, Aug 28) break correctly, but not those with single-digit dates (Aug 2, Aug 5). How can the line breaker be modified so that it accepts both kinds of logs? Any help would be appreciated.

[Mon Aug 5 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
[Mon Aug 2 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
[Mon Aug 25 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
[Mon Aug 28 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
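The usual fix for this kind of pattern is to allow one or two day digits (\d{1,2}) and flexible whitespace (\s+, since date-style output sometimes pads single-digit days with an extra space). A quick check of that relaxed pattern in Python against the four sample lines (a sketch of the regex only, not of Splunk's line-breaking pipeline):

```python
import re

# Relaxed breaker: \d{1,2} for the day, \s+ between tokens.
breaker = re.compile(
    r"([\r\n])\[\w{3}\s+\w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}\s+EDT\s+\d{4}\]"
)

lines = [
    "\n[Mon Aug 5 12:18:04 EDT 2024] - ...",
    "\n[Mon Aug 2 12:18:04 EDT 2024] - ...",
    "\n[Mon Aug 25 12:18:04 EDT 2024] - ...",
    "\n[Mon Aug 28 12:18:04 EDT 2024] - ...",
]
print([bool(breaker.match(l)) for l in lines])  # -> [True, True, True, True]
```

The same \d{1,2}/\s+ relaxation would go into the LINE_BREAKER value in props.conf.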
Hello, I'm trying to write a query where I provide a key identifier (say "A"), and the query both finds matching results and uses a field from those results as a filter for another query that provides additional needed data.

Obfuscating some things, this is the idea, and the closest I've gotten:

index=ind1 earliest=-1d field1=abc
| append [search index=ind1 earliest=-1d "A" field1=xyz | rename field2 as f2]
| where field2=f2 OR field1="xyz"

The idea is that results where field1=xyz and contain "A" have another field, field2, that is present and has a matching value whether field1=xyz or field1=abc. So I want to be able to search based on "A" and get back results where field1=xyz or field1=abc, with field2 matching between those two sets.

I do think a join would probably work here, but I've heard there can be performance issues with that, so I was trying to avoid it. It seems I can't use "where field2=f2", and the parent search also pulls in a lot of data because of the generally broad terms (I suppose because the piped where command is applied after the fact). Any ideas on how to write this performantly?
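To make the intended semantics concrete, here is a toy model in Python with made-up field values (not the actual data): first collect the field2 values from the "A"/field1=xyz events, then keep every event whose field2 is in that set. In SPL this two-step shape is often expressed as a subsearch that returns the field2 values directly into the base search, rather than an append followed by a where.

```python
# Hypothetical events standing in for the indexed data.
events = [
    {"field1": "xyz", "field2": "key1", "raw": "something A something"},
    {"field1": "abc", "field2": "key1", "raw": "related record"},
    {"field1": "abc", "field2": "key9", "raw": "unrelated record"},
]

# Step 1: field2 values from events containing "A" with field1=xyz.
anchor = {e["field2"] for e in events
          if e["field1"] == "xyz" and "A" in e["raw"]}

# Step 2: keep events (field1 xyz or abc) whose field2 matches the anchor set.
result = [e for e in events
          if e["field1"] in ("xyz", "abc") and e["field2"] in anchor]
print(len(result))  # -> 2
```

The key point the toy illustrates is that the anchor set must be computed before the filter is applied, which is why filtering inside the base search (via a subsearch) is typically cheaper than appending everything and filtering afterwards.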
Hello AppDynamics Community Team. I'm trying to disable the AppD agent to free up more licenses. Our team's strategy is to remove the APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY so that the agent cannot connect to the AppD UI, avoiding license consumption. Is that a good approach? Will it work? The idea is to free up licenses from applications that do not currently need AppD; once those licenses are free, we will use them to enable AppD for the applications we do need instrumented. Thanks!
Since upgrading to the new 4.0.4 release of the Lookup File Editor, the app no longer shows any lookups in the main interface. The Status page says the REST handler is offline, and while the troubleshooting page references that, it provides no recommendations aside from restarting Splunk. We have restarted everything at least 3 times but still cannot access our lookups.

What does a REST handler being offline even mean? Is that a setting? Can the search head just not see the REST interface? We couldn't find any settings or conf files within the editor app that define a particular address. The app resides on the search head, along with the lookups, so I can't imagine it is a firewall issue.

This is the only error we are seeing in the internal logs:

09/16/2024 09:50:24 AM -0500 CDT ERROR Failed to handle request due to an unhandled exception
Traceback (most recent call last):
  File "D:\Splunk\etc\apps\lookup_editor\bin\lookup_editor\rest_handler.py", line 196, in handle
    return function_to_call(request_info, **query)
  File "D:\Splunk\etc\apps\lookup_editor\bin\lookup_editor_rest_handler.py", line 688, in post_file_size
    lookup_author = res["entry"][i]["author"]
KeyError: 'author'

Help?
Hi all, I've got a lookup file called devices.csv that contains 2 fields, hostname and ip_address. The index I'm searching has 2 fields, src_ip and dest_ip.

I'd like to exclude results where both the src_ip and dest_ip fields match an IP address from my lookup file. It doesn't need to be the same IP, it just needs to be listed in that CSV. If either the src_ip field or the dest_ip field doesn't contain an IP address listed in the ip_address field, I would expect to see it. I'm just looking for advice on whether this is the best way of querying the data.

Current query:

index=network_traffic AND NOT ([| inputlookup devices.csv | fields ip_address | rename ip_address AS src_ip] AND [| inputlookup devices.csv | fields ip_address | rename ip_address AS dest_ip])
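The boolean the query is aiming for can be sanity-checked outside Splunk. Here is a toy sketch with invented IPs and events (not the real lookup): an event is excluded only when both src_ip and dest_ip appear in the lookup's ip_address column, and kept whenever at least one side is unknown.

```python
# Invented lookup contents and events, purely for illustration.
known_ips = {"10.0.0.1", "10.0.0.2"}

events = [
    {"src_ip": "10.0.0.1", "dest_ip": "10.0.0.2"},  # both known   -> excluded
    {"src_ip": "10.0.0.1", "dest_ip": "8.8.8.8"},   # one unknown  -> kept
    {"src_ip": "1.2.3.4",  "dest_ip": "8.8.8.8"},   # both unknown -> kept
]

# NOT (src known AND dest known), mirroring the SPL's NOT (... AND ...).
kept = [e for e in events
        if not (e["src_ip"] in known_ips and e["dest_ip"] in known_ips)]
print(len(kept))  # -> 2
```

Writing out the truth table like this makes it easy to confirm whether the SPL's NOT-over-AND matches the intended "exclude only when both sides are listed" behavior.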
Hi Splunkers, I am trying to configure REST API monitoring via the Splunk Add-on Builder, but when I try to test the configuration I receive an SSL error.

Splunk Add-on Builder version: 4.3.0
Splunk Enterprise version: 9.1.1

What can be done to mitigate this SSL error? Pasting the error herewith:

2024-09-16 15:28:49,569 - test_rest_api - [ERROR] - [test] HTTPError reason=HTTP Error HTTPSConnectionPool(host='endpoints.office.com', port=443): Max retries exceeded with url: /version?clientrequestid=b10c5ed1-bad1-445f-b386-b919946339a7 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)'))) when sending request to url=https://endpoints.office.com/version?clientrequestid=b10c5ed1-bad1-445f-b386-b919946339a7 method=GET
Traceback (most recent call last):
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/connectionpool.py", line 722, in urlopen
    chunked=chunked,
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/connectionpool.py", line 1060, in _validate_conn
    conn.connect()
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/connection.py", line 429, in connect
    tls_in_tls=tls_in_tls,
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/util/ssl_.py", line 450, in ssl_wrap_socket
    sock, context, tls_in_tls, server_hostname=server_hostname
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/splunk/lib/python3.7/ssl.py", line 428, in wrap_socket
    session=session
  File "/splunk/lib/python3.7/ssl.py", line 878, in _create
    self.do_handshake()
  File "/splunk/lib/python3.7/ssl.py", line 1147, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/requests/adapters.py", line 497, in send
    chunked=chunked,
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/connectionpool.py", line 802, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/util/retry.py", line 594, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='endpoints.office.com', port=443): Max retries exceeded with url: /version?clientrequestid=b10c5ed1-bad1-445f-b386-b919946339a7 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/cloudconnectlib/core/http.py", line 230, in _retry_send_request_if_needed
    uri=uri, body=body, method=method, headers=headers
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/cloudconnectlib/core/http.py", line 219, in _send_internal
    verify=self.requests_verify,
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/requests/adapters.py", line 517, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='endpoints.office.com', port=443): Max retries exceeded with url: /version?clientrequestid=b10c5ed1-bad1-445f-b386-b919946339a7 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/cloudconnectlib/core/engine.py", line 308, in _send_request
    response = self._client.send(request)
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/cloudconnectlib/core/http.py", line 296, in send
    url, request.method, request.headers, request.body
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/cloudconnectlib/core/http.py", line 243, in _retry_send_request_if_needed
    raise HTTPError(f"HTTP Error {err}") from err
cloudconnectlib.core.exceptions.HTTPError: HTTP Error HTTPSConnectionPool(host='endpoints.office.com', port=443): Max retries exceeded with url: /version?clientrequestid=b10c5ed1-bad1-445f-b386-b919946339a7 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

2024-09-16 15:28:49,570 - test_rest_api - [INFO] - [test] This job need to be terminated.
2024-09-16 15:28:49,570 - test_rest_api - [INFO] - [test] Job processing finished
2024-09-16 15:28:49,571 - test_rest_api - [INFO] - [test] 1 job(s) process finished
2024-09-16 15:28:49,571 - test_rest_api - [INFO] - [test] Engine executing finished
I'm upgrading Splunk Enterprise to 9.3 using the RPM file, but when I run

rpm -U splunk-9.3.0-51ccf43db5bd.x86_64.rpm

it installs all the folders but removes the bin directory, so I can't then start Splunk. I've searched through the communities, and a few people seem to have hit this issue on Windows, but not Linux. How can I get around this issue? Thanks, Dabbsy
Hi, I have an app with a set of icons that work fine in light mode, but if I switch to dark mode, they become invisible. If I add the lighter icons for dark mode, then the icons become invisible in light mode. Is there a way to have both sets of icons and have them change based on the active mode?