All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Splunkers, I have a strange behavior with a Splunk Enterprise Security SH. In the target environment, we have an indexer cluster queried by 2 SHs: a Core one and an Enterprise Security one. For a particular index, if we perform a search on the ES SH, we cannot see data. I mean, even if we perform the simplest query possible, which is: index=<index_name>, we get no result. However, if I try the same search on the Core SH, the data is shown. This behavior strikes me as very strange because it happens only with this specific index; all the other indexes return the same identical data whether the query is performed on the ES SH or the Core SH. So in a nutshell we can say: indexes that return results on the Core SH: N; indexes that return results on the ES SH: N - 1.
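Not part of the original post, but a common first check for this symptom is role-based index filtering that differs between the two search heads. A minimal sketch, assuming you can run REST searches as an admin; run it on both SHs and compare:

| rest /services/authorization/roles splunk_server=local
| table title srchIndexesAllowed srchIndexesDefault srchFilter

If the index is missing from srchIndexesAllowed (or excluded by srchFilter) for your role on the ES SH only, that would explain the N - 1 behavior.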
dyld[8605]: Library not loaded: @executable_path/../lib/libbz2.1.dylib
  Referenced from: <155E4B06-EBFB-3512-8A38-AF5B870FD832> /opt/splunk/bin/splunkd
  Reason: tried: '/opt/splunk/lib/libbz2.1.dylib' (code signature in <8E64DF20-704B-3A23-9512-41A3BCD72DEA> '/opt/splunk/lib/libbz2.1.0.3.dylib' not valid for use in process: library load disallowed by system policy), '/usr/lib/libbz2.1.dylib' (no such file, not in dyld cache)
ERROR: pid 8605 terminated with signal 6
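Not from the original post, but since the loader complains about an invalid code signature, macOS's codesign tool can confirm it; a minimal sketch using the paths from the error message:

# Verify the signature that dyld rejected
codesign --verify --verbose /opt/splunk/lib/libbz2.1.0.3.dylib

# If the signature really is broken (e.g. the files were copied or patched),
# an ad-hoc re-sign is a common lab-only workaround:
codesign --force --sign - /opt/splunk/lib/libbz2.1.0.3.dylib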
Here is the sample log:

{"date": "1/2/2022 00:12:22,124", "DATA": "[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success", "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

We have around 10 services. Using the query below I am getting 8 services; the other 2 are not displayed in the table, although we can see them in the events. Field extraction is working correctly, so I am not sure why the other 2 services are not showing up in the table.

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT services="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

Current output (the DCC:DONE and PIP:DONE columns are missing):

| _time | AAP:START | ACC:START | ABB:START | DCC:START | PIP:START | AAP:DONE | ACC:DONE | ABB:DONE |
| 1/2/2022 | 1 | 100 | 1 | 100 | 1 | 1 | 66 | 1 |
| 2/2/2022 | 5 | 0 | 5 | 0 | 3 | 3 | 0 | 3 |
| 3/2/2022 | 10 | 0 | 10 | 0 | 8 | 7 | 0 | 8 |
| 4/2/2022 | 100 | 1 | 100 | 1 | 97 | 80 | 1 | 80 |
| 5/2/2022 | 0 | 5 | 0 | 5 | 350 | 0 | 4 | 0 |

Expected output:

| _time | AAP:START | ACC:START | ABB:START | DCC:START | PIP:START | AAP:DONE | ACC:DONE | ABB:DONE | DCC:DONE | PIP:DONE |
| 1/2/2022 | 1 | 100 | 1 | 100 | 1 | 1 | 66 | 1 | 99 | 1 |
| 2/2/2022 | 5 | 0 | 5 | 0 | 3 | 3 | 0 | 3 | 0 | 2 |
| 3/2/2022 | 10 | 0 | 10 | 0 | 8 | 7 | 0 | 8 | 0 | 3 |
| 4/2/2022 | 100 | 1 | 100 | 1 | 97 | 80 | 1 | 80 | 1 | 90 |
| 5/2/2022 | 0 | 5 | 0 | 5 | 350 | 0 | 4 | 0 | 5 | 200 |
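Not part of the original post, but one likely cause worth ruling out: timechart keeps only the top 10 series by default and folds the rest into OTHER, and 10 services x 2 actions is 20 series. A minimal sketch of the same pipeline with the series cap lifted:

... | timechart span=1d limit=0 useother=f count by split

The rest of the query stays the same; limit=0 removes the series cap and useother=f stops the missing columns from being grouped into OTHER.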
Hello, I have these two results; I need to compare them and be told when they are different. Could you help me? Regards.
Hi, I have the scenario below. My brain is very slow at this time of the day! I need an eval that creates a Status field, as in the table below, flagging whether a host is running on IPv4, IPv6, or both IPv4 + IPv6.

| HOSTNAME | IPv4 | IPv6 | Status |
| SampleA | 0.0.0.1 | | IPv4 |
| SampleB | | 0.0.0.2 | IPv6 |
| SampleC | 0.0.0.3 | A:B:C:D:E:F | IPv4 + IPv6 |

Thanks in advance!!!
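Not from the original post, but a minimal sketch of one way to build that Status field, assuming IPv4 and IPv6 are null when absent:

| eval Status=case(isnotnull(IPv4) AND isnotnull(IPv6), "IPv4 + IPv6",
                   isnotnull(IPv4), "IPv4",
                   isnotnull(IPv6), "IPv6")

If the fields exist but hold empty strings, swap isnotnull(X) for X!="".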
Query1:

index=app-index source=application.logs "Initial message received with below details"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as IntialMessage by RampdataSet

Output:

| RampdataSet | IntialMessage |
| WAC | 10 |
| WAX | 30 |
| WAM | 22 |
| STC | 33 |
| STX | 66 |
| OTP | 20 |

Query2:

index=app-index source=application.logs "Initial message Successfull"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as SuccessfullMessage by RampdataSet

Output:

| RampdataSet | SuccessfullMessage |
| WAC | 0 |
| WAX | 15 |
| WAM | 20 |
| STC | 12 |
| STX | 30 |
| OTP | 10 |
| TTC | 5 |
| TAN | 7 |
| TXN | 10 |
| WOU | 12 |

Query3:

index=app-index source=application.logs "Initial message Error"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as ErrorMessage by RampdataSet

Output:

| RampdataSet | ErrorMessage |
| WAC | 0 |
| WAX | 15 |
| WAM | 20 |
| STC | 12 |

We want to combine the three queries and get the output shown below. How can we do that?

| RampdataSet | IntialMessage | SuccessfullMessage | ErrorMessage | Total |
| WAC | 10 | 0 | 0 | 10 |
| WAX | 30 | 15 | 15 | 60 |
| WAM | 22 | 20 | 20 | 62 |
| STC | 33 | 12 | 12 | 57 |
| STX | 66 | 30 | 0 | 96 |
| OTP | 20 | 10 | 0 | 30 |
| TTC | 0 | 5 | 0 | 5 |
| TAN | 0 | 7 | 0 | 7 |
| TXN | 0 | 10 | 0 | 10 |
| WOU | 0 | 12 | 0 | 12 |
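Not part of the original post, but a minimal sketch of one way to combine the three into a single search, assuming each event matches exactly one of the three message strings:

index=app-index source=application.logs ("Initial message received with below details" OR "Initial message Successfull" OR "Initial message Error")
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| eval msgtype=case(searchmatch("Initial message received with below details"), "IntialMessage",
                    searchmatch("Initial message Successfull"), "SuccessfullMessage",
                    searchmatch("Initial message Error"), "ErrorMessage")
| chart count over RampdataSet by msgtype
| fillnull value=0 IntialMessage SuccessfullMessage ErrorMessage
| addtotals fieldname=Total IntialMessage SuccessfullMessage ErrorMessage

chart fills in a zero for RampdataSet values that never see a given message type, which matches the expected TTC/TAN/TXN/WOU rows.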
Hi all! I've got an issue with macro expansion taking an excessively long time when you use the keyboard shortcut ctrl+shift+e. I'm looking for someone to try the same thing on their own system and let me know if you're seeing this too. That will help me determine if this is a problem in my environment or a possible bug in the software.

To test, find any macro in your environment.

Establish baseline: Enter just the macro name in the search box and press ctrl+shift+e (or command+shift+e, I think, on Mac). Note the length of time it takes for the modal pop-up to show you the expanded macro. It is not necessary to run the search.

`mymacro`

Test issue: Using the same macro as above, create a simple search that has the macro inside of a sub-search. Try expanding the macro. Are you getting a slow response? For me, it takes >20 seconds to expand the macro.

| makeresults
| append [`mymacro`]

I appreciate the help from anyone willing to test.
I'm setting up a lab instance of Splunk Enterprise in prep to replace our legacy instance in a live environment, and I'm getting this error message:

"homePath='/mnt/splunk_hot/abc/db' of index=abc on unusable filesystem"

I'm running RHEL 8 VMs with Splunk 9.1: 2 indexers clustered together and a cluster manager. I've attached external drives for hot and cold storage to each indexer. The external drives have been formatted as ext4, set in /etc/fstab to mount at boot as /mnt/splunk_hot and /mnt/splunk_cold, and indexes.conf points to them by volume. They come up at boot, and I can navigate to them and write to them. They're currently owned by root; I couldn't find who should have permission over them, so I left them as-is to start. I tried to enable OPTIMISTIC_ABOUT_FILE_LOCKING=1, but that didn't do anything.

That being said, I suspect I've missed a step in mounting the external drives. I wasn't able to find specifics about the way I'm doing this, so I pose the question: am I doing something wrong, or missing a step, in mounting these external drives? Is this now a bad practice? I'm stumped.

My indexes.conf:

[volume:hot]
path = /mnt/splunk_hot

[volume:cold]
path = /mnt/splunk_cold

[abc]
repFactor = auto
homePath = volume:hot/abc/db
coldPath = volume:cold/abc/db
thawedPath = $SPLUNK_DB/abc/thaweddb
# We're not utilizing frozen storage at all, so I left it default

Any advice here would be greatly appreciated!
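Not from the original post, but since the mount points are owned by root, a common cause of "unusable filesystem" is that the account running splunkd cannot own or write the path. A minimal sketch, assuming Splunk runs as a dedicated splunk user:

# On each indexer, give the Splunk service account the volumes
sudo chown -R splunk:splunk /mnt/splunk_hot /mnt/splunk_cold

# Then restart and re-check the index
sudo -u splunk /opt/splunk/bin/splunk restart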
index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT services="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

When we run the above query, not all services get captured, even though we have the data; I attached a screenshot (the highlighted ones are missing). Can anyone let me know what the issue with the query is?
I am reading the host from a log file and have a query to return all the hosts:

index=aaa source="/var/log/test1.log" | stats count by host

Can we include a step to categorize test/qa/prod in the dropdown list from the list of hosts returned by the query itself (using a wildcard: if the host contains "t" then test, if the host contains "q" then qa server, etc.)? For now I am using static options:

test - testhost
qa - qahost
prod - prodhost
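Not from the original post, but a minimal sketch of a search that could populate such a dropdown, assuming the environment really can be inferred from the hostname (the anchored patterns below are illustrative and would need tightening for real hostnames):

index=aaa source="/var/log/test1.log"
| stats count by host
| eval env=case(match(host, "^q"), "qa",
                match(host, "^t"), "test",
                true(), "prod")
| stats values(host) as hosts by env

The dashboard input's dynamic options can then be driven by env instead of static choices.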
hi Team, the Docker build is failing with this error:

=> ERROR [15/16] RUN sed -i 's/<AppenderRef ref=\"Console\"\/>/<!-- <AppenderRef ref=\"Console\"\/> -->/g' /usr/local/lib/python3.10/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml

Below is the pkg version:

appdynamics==23.8.0.6197

appdynamics-bindeps is not getting pulled/installed. I tried with the latest version of the appdynamics pkg, same experience:

appdynamics==24.2.0.6567

This is happening only on a Mac M1 Pro. Adding "appdynamics-bindeps-linux-x64==23.8.0" explicitly in requirements.txt gives the error below:

File "/tmp/appd/lib/cp310-cp310-ffd7b4d13d09a0572eb0f3d85bb006d0043821e28e0e1e2c12f81995da1bd796/site-packages/appdynamics_bindeps/zmq/backend/cython/__init__.py", line 6, in <module>
2024-04-10 11:14:44 from . import (constants, error, message, context,
2024-04-10 11:14:44 ImportError: cannot import name 'constants' from partially initialized module 'appdynamics_bindeps.zmq.backend.cython' (most likely due to a circular import) (/tmp/appd/lib/cp310-cp310-ffd7b4d13d09a0572eb0f3d85bb006d0043821e28e0e1e2c12f81995da1bd796/site-packages/appdynamics_bindeps/zmq/backend/cython/__init__.py)
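Not from the original post, but since this only fails on an M1 (arm64) Mac and appdynamics-bindeps ships linux-x64 wheels, one thing worth trying is forcing an amd64 build; a minimal sketch, assuming a python:3.10 base image:

# Build the image for linux/amd64 even on an arm64 Mac (runs under emulation)
docker build --platform linux/amd64 -t myapp .

# Or pin the platform in the Dockerfile itself
FROM --platform=linux/amd64 python:3.10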
Hi Team, what is the events-per-second (EPS) rate for a flat file monitored by a universal forwarder?
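Not from the original question, but rather than quoting a nominal figure, the forwarder reports its own per-source throughput in metrics.log; a minimal sketch to measure it, assuming the UF's _internal logs reach your indexers (<your_uf_host> is a placeholder):

index=_internal source=*metrics.log* host=<your_uf_host> group=per_source_thruput
| timechart avg(eps) by series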
Hi Team, as checked, our Splunk ITSI default scheduled backup is taking more than 10 hours to complete. Could you please assist us with this? Thanks
Does anything special need to be done when installing Splunk 9.1.1 on RHEL 9.3? Or can I just follow the standard steps and it will be good to go? Thanks -David
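Not from the original question, but for reference, a minimal sketch of the standard tarball install, which is the same shape on RHEL 9 as on RHEL 8; the paths and the splunk user are the usual conventions, not requirements:

# Unpack to /opt and start under a dedicated service account
tar xvzf splunk-9.1.1-<build>-Linux-x86_64.tgz -C /opt
chown -R splunk:splunk /opt/splunk
sudo -u splunk /opt/splunk/bin/splunk start --accept-license

# Optional: let systemd manage it at boot
/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk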
When setting up this receiver, otel fails to start with this message:

Error: failed to resolving: yaml: line 89: did not find expected key

Line 89 is smartagent/snmp:. Below is the collector config for this snmp block in otel:

smartagent/snmp:
  type: telegraf/snmp
  agents:
    - "172.xx.11.xx:xx2"
  version: 2
  community: "public"
  fields:
    name: "uptime"
    oid: ".1.3.6.1.2.1.1.3.0"
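Not part of the original post, but "did not find expected key" at that line usually means inconsistent indentation inside the block. For comparison, a minimal sketch of the same receiver with uniform two-space indentation and fields written as a list (the usual telegraf snmp shape; verify against your distribution's docs):

receivers:
  smartagent/snmp:
    type: telegraf/snmp
    agents:
      - "172.xx.11.xx:xx2"
    version: 2
    community: "public"
    fields:
      - name: "uptime"
        oid: ".1.3.6.1.2.1.1.3.0"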
Hello, I am receiving Darktrace events through my Edge Processor acting as a forwarder, and I am a bit new to the SPL2 pipeline. It can probably be solved by transforming something in the pipeline. The problem is that I am indexing events with a _time of -5h, a 2h difference from the event timestamp. Here is an example:

Time in the Edge Processor: [screenshot]

It should be noted that the rest of the events I ingest through this server arrive at the correct time.
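Not from the original post, and the exact fix depends on where the offset comes from (usually a timezone mismatch at parse time), but for illustration, an Edge Processor pipeline can shift _time directly; the +7200 below is a hypothetical 2-hour correction, not a recommendation:

$pipeline = | from $source
    // hypothetical fixed shift; fixing the timezone at extraction is usually better
    | eval _time = _time + 7200
    | into $destination;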
I see there is a premium app to show CDR data from CUCM, but is there a way to view this data by running a search without that app? I have Splunk set up as a billing server in CUCM but am unable to find any CDR data. We are using Enterprise on-prem.
Hi Team, I want to know if it is possible to find the count of specific fields and show them in different columns. Example: [screenshot]

For the above example, I want the result in the below format:

| Date | Count of RPWARDA | Count of SPWARAA | Count of SPWARRA | Diff (RPWARDA - (SPWARAA + SPWARRA)) |
| 2024/04/10 | 49 | 38 | 5 | 6 |

Is it possible using a Splunk query?

Original query:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA))
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| eval DIR = if(file="RPWARDA", "IN", "OUT")
| convert timeformat="%Y/%m/%d" ctime(_time) AS Date
| stats count by Date, file, DIR
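Not part of the original post, but a minimal sketch of one way to get that layout: pivot the per-file counts into columns with chart, then compute the difference:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA))
| rex field=TEXT "NIDF=(?<file>[^\s]+)"
| convert timeformat="%Y/%m/%d" ctime(_time) AS Date
| chart count over Date by file
| fillnull value=0 RPWARDA SPWARAA SPWARRA
| eval Diff = RPWARDA - (SPWARAA + SPWARRA)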
I am trying to access ACS (Admin Config Service) on a Splunk Cloud trial, but I am not able to. After acs login, I get this error:

linuxadmin@linuxxvz:~$ acs login --token-user test_acs_user
Enter Username: sc_admin
Enter Password:
An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=1ccdf228-d137-923d-be35-9eaad590d15c). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips.
{
  "code": "500-internal-server-error",
  "message": "An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=1ccdf228-d137-923d-be35-9eaad590d15c). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}
Error: stack login failed: POST request to "https://admin.splunk.com/prd-p-pg6yq/adminconfig/v2/tokens" failed, code: 500 Internal Server Error

A second attempt (from linuxadmin@linuxvm, requestID=5073a1f1-79d0-9ac1-9d9a-675df569846f) failed with the same 500 Internal Server Error.

Can someone please help here?
How can a Splunk admin give access to a service account (AB-CDRWYVH-L)? Access needed: Splunk API read/write access.
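Not part of the original question, but a minimal sketch of one common approach: create a role with the needed capabilities and map the service account to it through the REST API. The role name and the capabilities shown are illustrative; pick the capabilities your integration actually needs:

# Create a role for the service account (example capabilities only)
curl -k -u admin https://localhost:8089/services/authorization/roles \
    -d name=svc_api_rw \
    -d capabilities=rest_properties_get \
    -d capabilities=rest_properties_set \
    -d srchIndexesAllowed=*

# Assign the role to the service account
curl -k -u admin https://localhost:8089/services/authentication/users/AB-CDRWYVH-L \
    -d roles=svc_api_rw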