Dear all, we have two applications called Cortex and IST, related to credit card processing and management, which are provided by the vendor FIS. These two apps consist of two parts (Java and C). We successfully monitored the Java part, but we aren't able to monitor the C part, as the source code doesn't exist and is not provided by the FIS vendor. Has anyone succeeded in monitoring them, or does anyone have an idea about how to do that?
I am fairly confident that there is a clever workaround for this, though I am not 100% sure how. I have alerts stored in apps on a deployer which use the email function when triggered. If I need to add or remove recipients from an email alert, I have to manually edit several different recipient lists for several different alerts. What I want is a clever way to set up some sort of "list" of recipients, which I could name "developers" for instance, so that instead of having 20 email addresses as recipients in the alert I could use something like "$devops$", and then edit the recipients in a single location for all alerts instead of in each one separately. I hope this is a clear enough explanation of what I am hoping is possible, and I welcome all suggestions.
Hi all, I'm looking for help with how I can extract all available fields in a set of logs where a particular field sometimes does not exist. In Log A, the 'InlineResult' field exists, but in Log B it does not, and hence my current regex fails for that log entry. I know I could probably use a Splunk app to manage this automatically, but I want to understand how I could do this myself. Any suggestions, please?

Log A
%FTD-1-4xxxxx: DeviceUUID: X, InstanceID: 13, FirstPacketSecond: 2023-11-23, ConnectionID: y, SrcIP: 10.10.10.10, DstIP: 11.11.11.11, SrcPort: 666, DstPort: 999, Protocol: tcp, IngressInterface: z, EgressInterface: inta, IngressZone: intb, EgressZone: intc, Priority: 1, GID: 1, SID: 58724, Revision: 6, Message: SERVER-OTHER Apache Log4j logging remote code execution attempt, Classification: Attempted User Privilege Gain, Client: Web browser, ApplicationProtocol: HTTP, IntrusionPolicy: IntPolicy-000001, ACPolicy: ACpolicy_00001, AccessControlRuleName: ACrule-000001, NAPPolicy: Balanced Security and Connectivity, InlineResult: Would have blocked, IngressVRF: Global, EgressVRF: Global

Log B
%FTD-1-yyyyyy: DeviceUUID: Y, InstanceID: 15, FirstPacketSecond: 2023-11-23, ConnectionID: Z, SrcIP: 12.12.12.12, DstIP: 13.13.13.13, SrcPort: 111, DstPort: 222, Protocol: tcp, IngressInterface: Port-channel6, EgressInterface: INT1, IngressZone: INT2, EgressZone: INT3, Priority: 2, GID: 133, SID: 59, Revision: 1, Message: DCE2_EVENT__SMB_BAD_NEXT_COMMAND_OFFSET, Classification: Potentially Bad Traffic, WebApplication: SMBv3-unencrypted, Client: NetBIOS-ssn (SMB) client, ApplicationProtocol: NetBIOS-ssn (SMB), IntrusionPolicy: INTIDS, ACPolicy: ACBpolicy, AccessControlRuleName: ACBrule, NAPPolicy: Balanced Security and Connectivity, IngressVRF: Global, EgressVRF: Global
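One way to make such a field optional is to wrap its portion of the pattern in an optional non-capturing group. The sketch below demonstrates the idea in Python against shortened versions of the two sample logs; the same grouping trick works inside an SPL `rex` command. The surrounding field names are taken from the logs above, but which fields you anchor on is an assumption.

```python
import re

# The InlineResult portion is wrapped in an optional non-capturing group,
# so the match still succeeds on events (like Log B) where the field is
# absent; its named group then simply comes back as None.
pattern = re.compile(
    r"Classification:\s*(?P<classification>[^,]+),"
    r".*?"
    r"NAPPolicy:\s*(?P<nap_policy>[^,]+),"
    r"(?:\s*InlineResult:\s*(?P<inline_result>[^,]+),)?"
    r"\s*IngressVRF:\s*(?P<ingress_vrf>\w+)"
)

# Shortened versions of Log A (has InlineResult) and Log B (does not).
log_a = ("Classification: Attempted User Privilege Gain, Client: Web browser, "
         "NAPPolicy: Balanced Security and Connectivity, "
         "InlineResult: Would have blocked, IngressVRF: Global")
log_b = ("Classification: Potentially Bad Traffic, Client: NetBIOS-ssn (SMB) client, "
         "NAPPolicy: Balanced Security and Connectivity, IngressVRF: Global")

m_a = pattern.search(log_a)
m_b = pattern.search(log_b)
```

With Log A, `inline_result` captures "Would have blocked"; with Log B, the overall match still succeeds and the group is `None`.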
Hello, by default the fonts in a line chart are white. How can I change these colors to black?
Hi everyone, I would like to ask you about ITSI configuration. I want to configure ITSI as in the example below. I have three services (service1, service2 and service3). If some KPI in service3 is critical, I want to see services 2 and 1 as critical too. If after 5 minutes I no longer see critical in service3, I want the tree to immediately change back to the normal state (green). Can I configure ITSI as described above?
Hello, when I run the SPL below, it gives me all the regions that a user has accessed from. If I want to exclude a region or country from the list, where do I add it in the SPL query, and what is the SPL? I have tried several exclusion queries but they didn't work. Please help.

| tstats count(Authentication.user) FROM datamodel=Authentication WHERE (index=* OR index=*) BY Authentication.action Authentication.src
| rename Authentication.* AS *
| iplocation src
| where len(Country)>0 AND len(City)>0
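For reference, one common way to add the exclusion is a NOT filter after iplocation has populated the Country field. A sketch (the country names are placeholders, not values from the original question):

```spl
| tstats count(Authentication.user) FROM datamodel=Authentication WHERE (index=* OR index=*) BY Authentication.action Authentication.src
| rename Authentication.* AS *
| iplocation src
| where len(Country)>0 AND len(City)>0
| search NOT Country IN ("Germany", "France")
```

A `| where Country!="Germany"` clause at the end works the same way for a single value; the key point is that the filter must come after `iplocation`, since Country does not exist before that command runs.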
Hello everyone, I have got the cluster agent & operator installed successfully and am trying to auto-instrument the Java agent. The Cluster Agent is able to pick up the rules and identify the pod for instrumentation. When it starts creating the replica set, the new pod crashes with the error below:

➜ kubectl logs --previous --tail 100 -p pega-web-d8487455c-5btrc
[AppDynamics Java Agent ASCII banner] v3.1.790
..
..
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
Picked up JAVA_TOOL_OPTIONS: -Dappdynamics.agent.accountAccessKey=640299c1-a74f-47fc-96df-63e1e7188146 -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.socket.collection.bci.enable=true -javaagent:/opt/appdynamics-java/javaagent.jar
Error opening zip file or JAR manifest missing : /opt/appdynamics-java/javaagent.jar
Error occurred during initialization of VM
agent library failed to init: instrument

Cluster Agent logs:
[DEBUG]: 2023-11-23 00:48:09 - logpublisher.go:135 - Finished with all the requests for publishing logs
[DEBUG]: 2023-11-23 00:48:09 - logpublisher.go:85 - Time taken for publishing logs: 187.696677ms
[ERROR]: 2023-11-23 00:48:09 - executor.go:73 - Command basename `find /opt/appdynamics-java/ -maxdepth 1 -type d -name '*ver*'` returned an error when exec on pod pega-web-d8487455c-5btrc.
unable to upgrade connection: container not found ("pega-pega-web-tomcat")
[WARNING]: 2023-11-23 00:48:09 - executor.go:78 - Issues getting exit code of command 'basename `find /opt/appdynamics-java/ -maxdepth 1 -type d -name '*ver*'`' in container pega-pega-web-tomcat in pod ccre-sandbox/pega-web-d8487455c-5btrc
[ERROR]: 2023-11-23 00:48:09 - javaappmetadatahelper.go:45 - Failed to get version folder name in container: pega-pega-web-tomcat, pod: ccre-sandbox/pega-web-d8487455c-5btrc, verFolderName: '', err: failed to find exit code
[WARNING]: 2023-11-23 00:48:09 - podhandler.go:149 - Unable to find node name in pod ccre-sandbox/pega-web-d8487455c-5btrc, container pega-pega-web-tomcat
[DEBUG]: 2023-11-23 00:48:09 - podhandler.go:75 - Pod ccre-sandbox/pega-web-d8487455c-5btrc is in Pending state with annotations to be u

kubectl exec throws the same error:
➜ kubectl -n sandbox get pod
NAME READY STATUS RESTARTS AGE
pega-backgroundprocessing-59f58bb79d-xm2bt 1/1 Running 3 (3h34m ago) 9h
pega-batch-8f565b465-khvgd 0/1 Init:0/1 0 2s
pega-batch-9ff8965fd-k4jgb 1/1 Running 3 (3h34m ago) 9h
pega-web-55c7459cb9-xrhl9 0/1 Init:0/1 0 1s
pega-web-57497b54d6-q28sr 1/1 Running 16 (3h33m ago) 9h
➜ kubectl -n sandbox get pod
NAME READY STATUS RESTARTS AGE
pega-backgroundprocessing-59f58bb79d-xm2bt 1/1 Running 3 (3h34m ago) 9h
pega-batch-5d6c474c85-rkmxx 0/1 ContainerCreating 0 2s
pega-batch-8f565b465-khvgd 0/1 Terminating 0 7s
pega-batch-9ff8965fd-k4jgb 1/1 Running 3 (3h34m ago) 9h
pega-web-55c7459cb9-xrhl9 0/1 Terminating 0 6s
pega-web-57497b54d6-q28sr 1/1 Running 16 (3h33m ago) 9h
pega-web-d8487455c-5btrc 0/1 Pending 0 1s
➜ kubectl exec -it pega-batch-5d6c474c85-rkmxx -n sandbox bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("pega-pega-batch-tomcat")

Also, the workload pod is running with a custom user and not as root; I'm not sure whether that's an issue for permission to copy the Java agent binary? I assume it should not be. I would appreciate it if you could point me in the right direction to troubleshoot this issue.

PS: We are able to successfully use the init-container method to instrument the Java agent, and it works fine. Document reference: https://docs.appdynamics.com/appd/23.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/auto-instrument-applications-with-the-cluster-agent
So I had a JavaScript file running on my dashboard and it was working fine. Recently I uploaded an updated version of that JavaScript file. It shows my updated version on some computers, or for some users maybe, but some people are still seeing the old JavaScript file being served. Any idea how to make sure that all users get the new JavaScript file served to them when they load the dashboard? I have tried clearing the cache and restarting the computer multiple times. I have also tried /debug/refresh and _bump, which didn't solve my issue.
I need help locating the LogBinder log paths that are actively used on some of our servers. I was told I can find the list using Splunk's TA, but when I click on "LogBinder" under Apps, it shows blank, no data. Is there any other way to locate these paths in Splunk? Thank you in advance!
We are getting errors in our splunkd.log. Can you please help resolve them?

11-21-2023 11:50:33.289 -0700 WARN AwsSDK [12369 ExecProcessor] - ClientConfiguration Retry Strategy will use the default max attempts.
11-21-2023 11:50:33.289 -0700 WARN AwsSDK [12369 ExecProcessor] - ClientConfiguration Retry Strategy will use the default max attempts.
11-21-2023 11:50:34.290 -0700 ERROR AwsSDK [12369 ExecProcessor] - CurlHttpClient Curl returned error code 28 - Timeout was reached
11-21-2023 11:50:34.291 -0700 ERROR AwsSDK [12369 ExecProcessor] - EC2MetadataClient Http request to retrieve credentials failed
11-21-2023 11:50:34.291 -0700 WARN AwsSDK [12369 ExecProcessor] - EC2MetadataClient Request failed, now waiting 0 ms before attempting again.
11-21-2023 11:50:35.292 -0700 ERROR AwsSDK [12369 ExecProcessor] - CurlHttpClient Curl returned error code 28 - Timeout was reached
11-21-2023 11:50:35.292 -0700 ERROR AwsSDK [12369 ExecProcessor] - EC2MetadataClient Http request to retrieve credentials failed
11-21-2023 11:50:35.292 -0700 ERROR AwsSDK [12369 ExecProcessor] - EC2MetadataClient Can not retrive resource from http://169.254.169.254/latest/meta-data/placement/availability-zone

Version: Splunk Universal Forwarder 9.0.6 (build 050c9bca8588)
I am getting the below error from splunkd. How can I fix the root cause of this error? Please suggest a workaround.
We have a range of statuses from 200 to 600. I want to search logs and create output in the sample format below, with 200 to 400 as Success, 401 to 500 as Exception, and 501 to 600 as Failure:

Success - 100
Exception - 44
Failure - 3

I am able to get data in the above format, but I am getting duplicate rows for each category, e.g.:

Success - 10
Success - 40
Success - 50
Exception - 20
Exception - 24
Failure - 1
Failure - 2

Query:
Ns=abc app_name=xyz
| stats count by status
| eval status=if(status>=200 and status<400,"Success",status)
| eval status=if(status>=400 and status<500,"Exception",status)
| eval status=if(status>=500,"Failure",status)

Kindly help.
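For context, the duplicate rows come from running `stats count by status` before the codes are mapped, so each distinct raw status code keeps its own row. A sketch of one fix (categorize first, then aggregate; field and category names mirror the question):

```spl
Ns=abc app_name=xyz
| eval status_group=case(status>=200 AND status<=400, "Success",
                         status>=401 AND status<=500, "Exception",
                         status>=501 AND status<=600, "Failure")
| stats count by status_group
```

Alternatively, keep the original query as-is and finish with `| stats sum(count) by status` to collapse the mapped rows into one row per category.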
I have set up Splunk locally and am now trying to connect to it via Java code. What I see is that the Service.connect() step passes, but it fails further on when I try to create the search job with jobs.create(mySearch). My code:

import java.io.IOException;

import com.splunk.*;

/**
 * Log in using an authentication token.
 */
public class SplunkTest {

    static Service service = null;

    // static String token = "1k_Ostpl6NBe4iVQ5d6I3Ohla_U5";

    public static void main(String[] args) throws InterruptedException, IOException {
        HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);

        String token = "REDACTED";
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setPort(8089);
        loginArgs.setHost("localhost");
        loginArgs.setScheme("https");
        loginArgs.setToken(String.format("Bearer %s", token));

        // Initialize the SDK client
        service = Service.connect(loginArgs);
        System.out.println(service.getHost());
        System.out.println("connected successfully");

        // Retrieve the collection of search jobs
        JobCollection jobs = service.getJobs();

        // Create a simple search job
        String mySearch = "search * | head 5";
        Job job1 = jobs.create(mySearch);
    }
}
I am working on a playbook and I'm facing a challenge in synchronizing and comparing the outputs of two different actions, in particular domain reputation checks via the VirusTotal and Cisco Umbrella apps, executed on multiple artifacts within a container (the mentioned apps are just an example).

Below are the two challenges that I'm facing:

Synchronizing and comparing action outputs: my main issue is obtaining an output that allows me to verify and compare which IOCs have been flagged as malicious by both the VirusTotal and Cisco Umbrella apps. The current setup runs both actions on each artifact in the container, but I'm struggling with how to effectively gather and compare these results to determine which IOCs are considered high-risk (flagged by both apps) versus low-risk (flagged by only one app).

Filtering logic limitation in Splunk SOAR: another issue is that the SOAR filtering logic is applied at the container level, not at the individual artifact level. This is problematic when a container has multiple IOCs, as benign IOCs might be included in the final analysis even after the filter is applied. I need an effective method to ensure that only artifacts identified as potentially malicious are shown in the final output.

Below is an example of the scenario and the desired output:
- A container contains multiple artifacts.
- Actions executed: VirusTotal and Cisco Umbrella reputation checks on all artifacts.
- Expected output: a list or summary indicating which artifacts are flagged as malicious by both apps (classified as high-risk) and which are flagged by only one app (classified as low-risk).

I am looking for advice on how to structure the playbook to efficiently filter and analyze these artifacts, ensuring an accurate severity assessment based on the apps' results. Do you have any insights, examples, or best practices on how to define the filtering logic and analysis process in Splunk SOAR?
Thank you for your help
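The comparison step itself can be sketched in plain Python of the kind that could live in a SOAR custom-function or code block. The input sets and the high/low-risk naming below are assumptions for illustration, not the actual output format of the VirusTotal or Cisco Umbrella apps; in a real playbook you would first collect the flagged IOC values from each action's results.

```python
def classify_iocs(vt_flagged, umbrella_flagged):
    """Classify IOCs by how many reputation sources flagged them.

    vt_flagged / umbrella_flagged: iterables of IOC values (e.g. domains)
    that each app reported as malicious. Returns a dict with
    'high_risk' (flagged by both) and 'low_risk' (flagged by one).
    """
    vt = set(vt_flagged)
    umb = set(umbrella_flagged)
    return {
        "high_risk": sorted(vt & umb),   # flagged by both apps
        "low_risk": sorted(vt ^ umb),    # flagged by exactly one app
    }

# Hypothetical flagged sets gathered from the two actions' results:
verdict = classify_iocs(
    vt_flagged={"bad.example.com", "evil.example.net"},
    umbrella_flagged={"bad.example.com", "shady.example.org"},
)
```

Working on per-artifact IOC values like this also sidesteps the container-level filter limitation: only the artifacts whose values appear in `high_risk` or `low_risk` need to be carried forward to the final output.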
Hello, I manage a Splunk hybrid environment (cloud SH, on-premise DS, HF, etc.). I have a task to create custom roles and RBAC. I have a few questions and would be thankful if you could help me clarify them:
1) Do custom roles propagate between Splunk instances? For example, if I create a role on a cloud SH, will it propagate automatically to other cloud SHs and the on-premise DS? Or do I have to create the roles and assign users manually everywhere?
2) Is there a set of Splunk best practices for role creation?
3) What is the difference between creating roles in the web GUI vs. on the backend (on on-prem instances)? Is the final result the same?
Hi, I want to find out how many license warnings there are in the current 60-day rolling window. Why is there no easy way to find this? Surely it should be included in the license usage report? Regards, Knut
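One hedged way to approximate this yourself is to count the days in the window where the daily rollover summary in _internal shows usage above the pool quota (the `b` and `poolsz` fields are what license_usage.log normally emits in RolloverSummary events, but verify the field names against your own data before relying on this):

```spl
index=_internal source=*license_usage.log* type=RolloverSummary earliest=-60d
| eval over_quota=if(b > poolsz, 1, 0)
| stats sum(over_quota) AS warning_days
```

Each day over quota roughly corresponds to one warning day in the rolling window; the license manager's own warning count is authoritative, so treat this as a sanity check rather than the official figure.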
I am trying to convert GMT time to CST time. I am able to get the desired data using the query below. Now I am looking for a query to convert GMT to CST.

index=test AcdId="*" AgentId="*" AgentLogon="*" chg="*" seqTimestamp="*" currStateStart="*" currActCodeOid="*" currActStart="*" schedActCodeOid="*" schedActStart="*" nextActCodeOid="*" nextActStart="*" schedDate="*" adherenceStart="*" acdtimediff="*"
| eval seqTimestamp=replace(seqTimestamp,"^(.+)T(.+)Z$","\1 \2")
| eval currStateStart=replace(currStateStart,"^(.+)T(.+)Z$","\1 \2")
| eval currActStart=replace(currActStart,"^(.+)T(.+)Z$","\1 \2")
| eval schedActStart=replace(schedActStart,"^(.+)T(.+)Z$","\1 \2")
| eval nextActStart=replace(nextActStart,"^(.+)T(.+)Z$","\1 \2")
| eval adherenceStart=replace(adherenceStart,"^(.+)T(.+)Z$","\1 \2")
| table AcdId, AgentId, AgentLogon, chg, seqTimestamp, seqTimestamp1, currStateStart, currActCodeOid, currActStart, schedActCodeOid, schedActStart, nextActCodeOid, nextActStart, schedDate, adherenceStart, acdtimediff

Below are the results I am getting:
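A sketch of one approach for a single field, assuming the raw value looks like 2023-11-23T12:34:56Z and treating CST as a fixed UTC-6 offset (which ignores daylight saving, when US Central is UTC-5):

```spl
| eval seq_epoch=strptime(seqTimestamp, "%Y-%m-%dT%H:%M:%SZ")
| eval seqTimestamp_cst=strftime(seq_epoch - 6*3600, "%Y-%m-%d %H:%M:%S")
```

Note that strptime/strftime interpret times in the searching user's timezone, so the arithmetic above is only exact when that timezone is set to GMT/UTC; if you need proper DST handling, setting the user's timezone preference to US/Central and letting Splunk render _time is often the cleaner fix.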
Hi, there are a lot of clients in my architecture, and each Splunk instance is deployed in either /opt/bank/splunk, /opt/insurance/splunk, or /opt/splunk. Hence I want to run a command to extract a list of all clients along with the path where splunkd is running. How can I achieve this? Please suggest.
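A sketch of one way to discover the install paths, shown here against a scratch directory that mimics the three layouts above (in production, point ROOT at / so the find runs over the real /opt, and drop the setup lines):

```shell
# Build a throwaway directory tree mimicking the three deployment layouts.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/opt/bank/splunk/bin" "$ROOT/opt/insurance/splunk/bin" "$ROOT/opt/splunk/bin"
touch "$ROOT/opt/bank/splunk/bin/splunkd" \
      "$ROOT/opt/insurance/splunk/bin/splunkd" \
      "$ROOT/opt/splunk/bin/splunkd"

# Installed instances: any splunkd binary a few levels below /opt.
find "$ROOT/opt" -maxdepth 4 -type f -name splunkd

# Running instances (full binary path per process) could be listed with
# something like:  ps -eo args= | awk '$1 ~ /splunkd$/ {print $1}' | sort -u
```

The find output gives one line per instance, e.g. .../opt/bank/splunk/bin/splunkd, from which the client name (bank, insurance, ...) can be cut out with awk or cut.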
Hello, I'm trying to resolve a monitoring issue with the available .csv files in a specific directory. There are several files marked with different dates, e.g. 2023-11-16_filename.csv or 2023-11-20_filename.csv; for this reason, none of them has the same date at the beginning. I'm able to sync most of the files with the server, but there are some which I'm not. For example, my indexing started on 02.10.23, and all the files with that date or later are available as a source, but all the files before this date are not, e.g. 2023-09-15_filename.csv. What could cause this behaviour, and is there a way to push files dated before 02.10.2023 so they become available as a source? Thanks
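One setting worth checking (a hedged guess, since the inputs.conf involved isn't shown): a monitor input's ignoreOlderThan option permanently skips files whose modification time falls outside the window, which would explain why CSVs older than the cutover date never appear as a source. A sketch of the relevant stanza, with a placeholder path:

```ini
[monitor:///data/reports/*.csv]
# If this (or an inherited default) is set, files last modified more than
# 30 days ago are skipped permanently. Remove or enlarge it, then restart
# the forwarder, so the older CSVs can be considered again.
ignoreOlderThan = 30d
```

If ignoreOlderThan is not in play, touching the old files to refresh their modification time is another common way to get them picked up, since the monitor input decides based on modtime rather than the date embedded in the filename.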