All Topics

Hi all, I wrote a dummy Java/Quarkus app that fetches HTML data from any web page. My goal is to see the tier pointing to the endpoints in AppDynamics' flow map.

My code:

    @GET
    @Path("/1")
    @Produces(MediaType.TEXT_PLAIN)
    public String test1() {
        try {
            CloseableHttpClient httpClient = HttpClients
                    .custom()
                    .setSSLContext(new SSLContextBuilder().loadTrustMaterial(null, TrustAllStrategy.INSTANCE).build())
                    .build();
            HttpGet request = new HttpGet("https://nylen.io/d3-spirograph/");
            CloseableHttpResponse response = httpClient.execute(request);
            System.out.println(response.getProtocolVersion());               // HTTP/1.1
            System.out.println(response.getStatusLine().getStatusCode());    // 200
            System.out.println(response.getStatusLine().getReasonPhrase());  // OK
            System.out.println(response.getStatusLine().toString());         // HTTP/1.1 200 OK
            HttpEntity entity = response.getEntity();
            if (entity != null) {
                String result = EntityUtils.toString(entity);
                response.close();
                return result;
            }
        } catch (ClientProtocolException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (NoSuchAlgorithmException e) {
            e.printStackTrace();
        } catch (KeyStoreException e) {
            e.printStackTrace();
        } catch (KeyManagementException e) {
            e.printStackTrace();
        }
        return "ok";
    }

Path /2:

    @GET
    @Path("/2")
    @Produces(MediaType.TEXT_PLAIN)
    public String test2() throws Exception {
        SSLContext sslcontext = SSLContext.getInstance("TLS");
        sslcontext.init(null, new TrustManager[]{new X509TrustManager() {
            public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
            public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
        }}, new java.security.SecureRandom());

        // Client client = ClientBuilder.newClient()
        Client client = ClientBuilder.newBuilder()
                .sslContext(sslcontext)
                .hostnameVerifier((s1, s2) -> true)
                .build();

        String ssb = "https://self-signed.badssl.com/";
        String response = client.target(ssb)
                // .queryParam("query", "q")
                .request()
                .accept("text/html")
                .get(String.class);
                // .post(Entity.entity("e", "text/plain"), String.class);
        client.close();
        return response;
    }

Starting the app:

    java -javaagent:/opt/appdynamics-agent/ver21.8.0.32958/javaagent.jar \
         -jar /root/quarkus/vintageStore/rest-book/target/quarkus-app/quarkus-run.jar

Startup logs:

    Agent runtime conf directory set to /opt/appdynamics-agent/ver21.8.0.32958/conf
    [AD Agent init] Tue Oct 19 01:26:51 BRT 2021[INFO]: AgentInstallManager - Agent runtime conf directory set to /opt/appdynamics-agent/ver21.8.0.32958/conf
    [AD Agent init] Tue Oct 19 01:26:51 BRT 2021[INFO]: JavaAgent - JDK Compatibility: 1.8+
    [AD Agent init] Tue Oct 19 01:26:51 BRT 2021[INFO]: JavaAgent - Using Java Agent Version [Server Agent #21.8.0.32958 v21.8.0 GA compatible with 4.4.1.0 r38646896978b0b95298354a38b015eaede619691 release/21.8.0]
    [AD Agent init] Tue Oct 19 01:26:51 BRT 2021[INFO]: JavaAgent - Running IBM Java Agent [No]
    [AD Agent init] Tue Oct 19 01:26:51 BRT 2021[INFO]: JavaAgent - Java Agent Directory [/opt/appdynamics-agent/ver21.8.0.32958]
    [AD Agent init] Tue Oct 19 01:26:51 BRT 2021[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdynamics-agent/ver21.8.0.32958]
    Agent logging directory set to [/opt/appdynamics-agent/ver21.8.0.32958/logs]
    [AD Agent init] Tue Oct 19 01:26:51 BRT 2021[INFO]: JavaAgent - Agent logging directory set to [/opt/appdynamics-agent/ver21.8.0.32958/logs]
    getBootstrapResource not available on ClassLoader
    Registered app server agent with Node ID[234307] Component ID[94762] Application ID [55102]
    Started AppDynamics Java Agent Successfully.

The problem is that I cannot see "Service Endpoints" being discovered.

Hello Splunk Community, can anyone help me build a query based on the below?

I have built a query which calculates the total duration of multiple events over a period of time. What I am trying to do now is create a timechart with _time on the x-axis and duration on the y-axis. I think I need to convert the duration from hours to minutes, but I'm not sure how to do this. Below is an example of the output from my original query that I am trying to visualise in a timechart:

    _time         duration (in hours)
    2021-10-12    03:56:30
    2021-10-13    04:27:25
    2021-10-14    04:21:03
    2021-10-18    07:11:04

THANK YOU (in advance)

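A minimal sketch of the conversion, assuming the field is literally named duration and holds an HH:MM:SS string: split it on the colons, fold the parts into minutes, then timechart the numeric field (the span and the max() aggregate below are just examples; pick what fits your data):

    ... your existing search ...
    | eval parts = split(duration, ":")
    | eval duration_minutes = tonumber(mvindex(parts, 0)) * 60 + tonumber(mvindex(parts, 1)) + tonumber(mvindex(parts, 2)) / 60
    | timechart span=1d max(duration_minutes) AS duration_minutes
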
Hi All, after a bit of googling I've come up empty with regards to identifying the security issues that have been addressed in each Splunk Enterprise version update. Does anyone have a link, or can anyone provide some details on where I can find this information? I have looked through the release notes pages, but these seem to only list functional improvements and fixes. I appreciate anyone that can help with this.

Hello, I have an issue writing a props configuration for a text source file whose first two lines (including the "----" line) are header info. Please see three sample events along with the two header lines below. I also included the props that I wrote for this source file, but it is not working as expected: I am getting the error message "failed to parse timestamp". Any help will be highly appreciated. Thank you so much.

Sample data:

    Event_id  user_id   group_id  create_date              create_login  company_event_id  event_name
    --------  --------  --------  -----------------------  ------------  ----------------  -----------
    105       346923    NULL      2021-10-07 14:13:21.160  783923        45655234          User Login
    250       165223    NULL      2021-10-07 15:33:54.857  566923        92557239          User Login
    25        1168923   NULL      2021-10-07 16:44:05.257  346923        34558242          User Login

The props config I wrote:

    SHOULD_LINEMERGE=false
    INDEXED_EXTRACTIONS=csv
    TIMESTAMP_FIELDS=create_date
    TIME_FORMAT=%Y-%m-%d  %H:%M:%S.%3N
    HEADERFIELD_LINE_NUMBER=1

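A hedged sketch of what I would try instead, with the two things that jump out at me: the attribute is spelled HEADER_FIELD_LINE_NUMBER (with underscores), and the TIME_FORMAT above has two spaces between the date and time parts while the sample data has one, which alone can produce "failed to parse timestamp". The delimiter and ruler-line handling below are assumptions about your file:

    [your_sourcetype]
    SHOULD_LINEMERGE = false
    # assumption: the columns are tab-separated; if they are comma-separated keep csv
    INDEXED_EXTRACTIONS = tsv
    HEADER_FIELD_LINE_NUMBER = 1
    # assumption: intended to drop the "-----" ruler line; verify PREAMBLE_REGEX
    # behaves for a line that comes after the header in your version
    PREAMBLE_REGEX = ^[-\s]+$
    TIMESTAMP_FIELDS = create_date
    # single space between date and time, matching the sample data
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
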
Hi, this is my first time setting up Splunk in Kubernetes using the Splunk Operator. I have set up the cluster just fine. One challenge I'm having now is deploying my Splunk apps to our search head cluster. Here are the docs that I followed: https://splunk.github.io/splunk-operator/AppFramework.html

The issues are:
1. My deployer keeps getting undeployed every time I make changes to the SHC CRD. I don't know why.
2. The app is simply not getting deployed, even though the app's .tgz file is already in my S3 bucket.

Here's the spec of my SHC:

    ...
    spec:
      appRepo:
        appSources:
          - location: searchHeadApps/
            name: assettrackerapp.tgz
        appsRepoPollIntervalSeconds: 30
        defaults:
          scope: cluster
          volumeName: volume_app_repo_us
        volumes:
          - endpoint: https://dev-splunk-operator.s3.amazonaws.com
            name: volume_app_repo_us
            path: dev-splunk-operator
            provider: aws
            secretRef: s3-secret
            storageType: s3
    ...

Here are some of the splunk-operator logs:

    {"level":"info","ts":1634593053.3164997,"logger":"splunk.enterprise.ValidateAppFrameworkSpec","msg":"App framework configuration is valid"}
    {"level":"info","ts":1634593053.3165247,"logger":"splunk.enterprise.initAndCheckAppInfoStatus","msg":"Checking status of apps on remote storage...","name":"sh","namespace":"splunk"}
    {"level":"info","ts":1634593053.3165333,"logger":"splunk.enterprise.GetAppListFromS3Bucket","msg":"Getting the list of apps from remote storage...","name":"sh","namespace":"splunk"}
    {"level":"info","ts":1634593053.3198195,"logger":"splunk.enterprise.GetRemoteStorageClient","msg":"Creating the client","name":"sh","namespace":"splunk","volume":"volume_app_repo_us","bucket":"dev-splunk-operator","bucket path":"searchHeadApps/"}
    {"level":"info","ts":1634593053.3199255,"logger":"splunk.client.InitAWSClientSession","msg":"AWS Client Session initialization successful.","region":"","TLS Version":"TLS 1.2"}
    {"level":"info","ts":1634593053.319938,"logger":"splunk.client.GetAppsList","msg":"Getting Apps list","AWS S3 Bucket":"dev-splunk-operator"}
    {"level":"error","ts":1634593053.3199534,"logger":"splunk.client.GetAppsList","msg":"Unable to list items in bucket","AWS S3 Bucket":"dev-splunk-operator","error":"MissingRegion: could not find region configuration"}

Please advise, thank you.

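One hedged guess, from the MissingRegion error and the empty "region":"" in the session log: the bucket-style endpoint gives the AWS SDK nothing to derive a region from. A region-qualified S3 endpoint in the volume spec may help (us-east-1 below is only a placeholder; substitute your bucket's actual region):

    volumes:
      - name: volume_app_repo_us
        storageType: s3
        provider: aws
        # placeholder region; use the region your bucket lives in
        endpoint: https://s3.us-east-1.amazonaws.com
        path: dev-splunk-operator
        secretRef: s3-secret
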
I have some data like the following:

    NAME  Code
    Suzy  0
    John  0
    Adam  1
    Suzy  1
    John  0
    Adam  1

I am trying to calculate the ratio of code=1 to code=0, by name, and display these ratios by hour. The name values are dynamic and unknown at query time. I can get halfway there using a dynamic eval field name, like this:

    index=SOME_INDEX sourcetype=SOME_SOURCETYPE code
    | eval counterCode0{name} = if(code=0, 1, 0)
    | eval counterCode1{name} = if(code=1, 1, 0)
    | bin _time span=1m
    | stats sum(counterCode0*), sum(counterCode1*) by _time

But I can't figure out how to get the ratios of counterCode1* to counterCode0*. Any ideas? Or do I need to approach this problem differently?

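A sketch of an alternative that avoids the dynamic field names, assuming the extracted fields are name and code as in the search above: count each code value per name per hour with eval-conditioned counts, compute the ratio, then pivot names into columns with xyseries:

    index=SOME_INDEX sourcetype=SOME_SOURCETYPE code=*
    | bin _time span=1h
    | stats count(eval(code=1)) AS ones count(eval(code=0)) AS zeros BY _time name
    | eval ratio = if(zeros = 0, null(), round(ones / zeros, 3))
    | xyseries _time name ratio
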
My lookup looks like this:

    | inputlookup file1.csv

    field1  field2
    1       a
    2       b
    3       c

I need it transposed, like so:

    1  2  3
    a  b  c

Help! Thanks.

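If I am reading the before/after right, the transpose command may be all that is needed; a sketch assuming field1 holds the values that should become the new column headers:

    | inputlookup file1.csv
    | transpose 0 header_field=field1

(transpose 0 means "all rows"; the field2 values end up as the single data row.)
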
Hi, how can I extract a table like this? ("myserver" is a field that is already extracted)

    source     destination  duration  V
    server1    myserver     0.001     9288
    myserver   server2      0.002     9288
    server2    myserver     0.032     0298
    myserver   server1      0.004     9298

FYI: duration is calculated as described below:

    Line1 (duration 00:00:00.001) = (12:00:59.853) - (12:00:59.852)
    Line2 (duration 00:00:00.002) = (start_S 12:00:59.855) - (start_S 12:00:59.853)
    Line3 (duration 00:00:00.110) = (forWE_APP_AS: G 12:00:59.994) - (forWE_APP_AS: P 12:00:59.884)
    Line4 (duration 00:00:00.004) = (end_E 12:01:00.007) - (end_E 12:01:00.003)

Here is the log (G=get, P=push):

    12:00:59.852 app     module1: G[server1]Q[000]V[9288]
    12:00:59.853 app     start_S: A_B V[9288]X[000000]G[0]L:
    12:00:59.855 app     module2: A_B V[9288]X[000000]G[0]L:
    12:00:59.855 app     start_S: C_D V[9288]X[000000]G[0]L:
    12:00:59.881 app     module3: A_B V[9288]X[000000]G[0]L:
    12:00:59.884 app     forWE_APP_AS: P[server2]K[000]V[0288]
    12:00:59.994 app     forWE_APP_AS: G[server2]K[000]V[0298]
    12:00:59.995 app     module2: A_B V[9298]X[000000]G[0]K:
    12:01:00.003 app     end_E: A_B V[9298]X[000000]G[0]K:
    12:01:00.007 app     module1: P[server1]K[458]V[9298]
    12:01:00.007 app     end_E: C_D V[9298]X[000000]G[0]K:

Any ideas? Thanks.

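I don't see one generic rule covering all four pairings, but as a starting sketch, here is how I would pull out the pieces and time consecutive events that share a V value. The field names are mine, the rex patterns are rough guesses from the samples (they will also match the G[0] tokens, so tighten them to your real format), and the pairs that cross V values, like 0288/0298 on line 3, would need their own join key:

    <your base search>
    | rex "^(?<clock>\d{2}:\d{2}:\d{2}\.\d{3})"
    | rex "(?<op>[GP])\[(?<server>[^\]]+)\]"
    | rex "V\[(?<V>\d+)\]"
    | eval t = strptime(clock, "%H:%M:%S.%3N")
    | sort 0 t
    | streamstats window=2 range(t) AS duration BY V
    | table clock op server V duration
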
Have any of you upgraded Windows-based universal forwarders using WSUS? If so, what syntax did you use when deploying it? I've tried this once in the past and don't believe it was totally successful:

    msiexec.exe /i <file path><file name>.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER=<our_deployment_server>:8089 /quiet

Is there anything wrong with this syntax, or anything I may have missed that should be corrected for next time?

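For what it's worth, AGREETOLICENSE and DEPLOYMENT_SERVER are documented universal forwarder installer properties, so the flags themselves look right to me. When a silent rollout half-works, the standard msiexec verbose-log switch is the first thing I would add (the log path below is just an example):

    msiexec.exe /i <file path><file name>.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER=<our_deployment_server>:8089 /quiet /l*v C:\Windows\Temp\uf_upgrade.log
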
Is it possible to have a Splunk sandbox in the cloud, and to occasionally refresh it with a few weeks of data from an on-premises instance running in a physical data center?

Hi, say we have an action (let's call it Action1) that returns this under data:

    [
      {"type": "type1", "target": "target value1"},
      {"type": "type2", "target": "target value2"}
    ]

I want to pass the target to another action (Action2) as a parameter, so I use the action_result.data.*.target datapath to do it. That action returns this:

    [
      {"result_from_action": "result_for target value1"},
      {"result_from_action": "result_for target value2"}
    ]

Each row corresponds to the input row. We have a third action (let's call it Action3) that accepts two parameters, the type from Action1 and the result_from_action from Action2, so I pass:

- action_result.data.*.type from Action1
- action_result.data.*.result_from_action from Action2

I want Action3 to be executed 2 times, for the two pairs ("type1", "result_for target value1") and ("type2", "result_for target value2"), but in reality the action is executed 4 times, for all possible combinations (the full cross product). I understand why this is happening, but I'm curious whether there's a good way to force the platform to do what I need (without using custom functions to build another list and use it as input).

Thanks!

I am trying to integrate Dashboard Studio with our external app using Splunk React components. I am able to see graphs and other components. The only problem is the time range component, which is giving the following error:

    "Cannot access splunkweb."

Below is my definition.json:

    {
      "visualizations": {},
      "dataSources": {},
      "inputs": {
        "input_1": {
          "type": "input.timerange",
          "title": "Select Time",
          "options": {
            "defaultValue": "-5m,now",
            "token": "trp"
          }
        }
      },
      "layout": {
        "type": "absolute",
        "options": {},
        "structure": [],
        "globalInputs": [
          "input_1"
        ]
      },
      "description": "",
      "title": "TRP Input Dash"
    }

Thanks, Shailendra

I am trying to get the 14-day free trial of Splunk Cloud and keep getting the "An internal error was detected when creating the stack" error. I saw that this has been an issue for several other people. How do I get this trial? I need it for a school assignment.

I have the following situation:

queryA returns correlations:

    AAA
    BBB
    CCC
    DDD

queryB returns correlations:

    AAA
    CCC
    EEE

The expected result is the queryA events with correlations AAA and CCC. I need a query that compares the correlation field between them and, where they are equal, shows me the queryA events. Thanks.

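A sketch of the usual subsearch pattern, assuming both result sets expose a search-time field named correlation: the subsearch returns queryB's distinct correlation values, and the outer search keeps only the queryA events whose correlation matches one of them. Be aware of the default subsearch result cap if queryB can return many values.

    <queryA>
        [ search <queryB>
          | stats count BY correlation
          | fields correlation ]
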
I am currently using a lookup to find matching IDs in my data. The lookup table is around 400k rows, and if I use inputlookup with a join or append there is a limit on the number of rows searched from the lookup table. I am now using just the lookup command to find the matching data, and it works without any truncation warnings, but I'm wondering if there is a limit for this command similar to subsearches. I can't seem to find anything in the lookup documentation.

Sample search:

    index=some_index
    | lookup users_list.csv ID OUTPUTNEW username

I output a new field so that I can do "search username=*"; since username is a new field, that gives me only the IDs that match my lookup table.

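As far as I know, the lookup command is not subject to the subsearch result caps that bite with inputlookup-plus-join; the knob I am aware of for file-based lookups is the in-memory size threshold in limits.conf, past which Splunk indexes the lookup file on disk rather than truncating results (the value below is what I believe is the default, shown for illustration only):

    [lookup]
    # threshold in bytes above which a CSV lookup is indexed on disk
    # instead of being held in memory
    max_memtable_bytes = 10000000
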
I need to modify limits.conf for an indexer cluster. My question: if I modify $SPLUNK_HOME/etc/system/local/limits.conf, can this be done on the cluster manager and pushed out, or does it need to be modified on the individual indexers themselves?

First event:

    INFO | 2021-10-18 05:17 AM | BUSINESS RULE | Payload for ID#: 40658606156551247672591634534230307 with status Approved is published

Second event:

    msg: INFO | 2021-10-14 10:38 PM | Message consumed: {"InputAmountToCredit":"22.67","CurrencyCode":"AUD","Buid":"1401","OrderNumber":"877118406","ID":"58916"}

I want to sum InputAmountToCredit based on status. The status can vary, and ID is the common field between both events. What is the best way to sum the amount with the same status for a specified timeframe?

Thanks for all the support.

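A sketch of one way to stitch the two event shapes together: extract the fields from each shape, merge per ID, then sum per status. The regexes below are guesses from the two samples, and I am taking the post's word that ID really does match across the pair:

    <base search matching both event types>
    | rex "ID#: (?<ID>\d+) with status (?<status>\w+)"
    | rex "\"ID\":\"(?<ID>\d+)\""
    | rex "\"InputAmountToCredit\":\"(?<InputAmountToCredit>[\d.]+)\""
    | stats values(status) AS status sum(InputAmountToCredit) AS amount BY ID
    | stats sum(amount) AS total_amount BY status
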
I need to index a file: /var/log/file.txt. This file is written every day, but sometimes the content doesn't change, which leaves me with no events on days that stay the same. I need it to be indexed every time the timestamp on the file changes. I believe I need to add crcSalt = <SOURCE> to inputs.conf in order to reindex it. However, my input monitors all files in /var/log, so if I added that to the monitor stanza it would likely apply to every file in /var/log, reindexing them all every time, which I don't want. How can I reindex just this file daily while leaving the other files in the directory unchanged? Many thanks.

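A sketch of how I would scope it, assuming the broad stanza is [monitor:///var/log] (the stanza paths are guesses at your setup): give the one file its own stanza carrying crcSalt, and blacklist it from the broad stanza so the two inputs don't overlap. One caveat worth flagging: <SOURCE> only mixes the file's path into the CRC, so on its own it forces a re-read when the path changes, not necessarily when only the modification time changes.

    # broad input, now excluding the special file (blacklist matches the full path)
    [monitor:///var/log]
    blacklist = file\.txt$

    # dedicated input for the one file; crcSalt applies only here
    [monitor:///var/log/file.txt]
    crcSalt = <SOURCE>
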
I am trying to extract the messages of a commonly used error log:

    Creating review recommendations service case activity with errorMessage:  example message one here
    Creating review recommendations service case activity with errorMessage:  example message two over here

I want to graph counts of the totals of each individual message, so I would like to extract the string and stats count by message. I am having trouble extracting the string. How do I do this cleanly? The goal is to have results like:

    "example message one here":      X results
    "example message two over here": Y results

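A sketch of the rex I would try, assuming the message is always the remainder of the line after "errorMessage:":

    <your base search> "Creating review recommendations service case activity with errorMessage:"
    | rex "errorMessage:\s+(?<message>.+)"
    | stats count BY message
    | sort - count
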
Hi All, we want to run a POC for our client to ingest OpenTelemetry logs and traces into Splunk, and I have the following questions:

1. Is it possible to do this with a Splunk Enterprise trial license, or do we need to buy the Splunk Observability module to monitor the OpenTelemetry data?
2. Can we use a universal forwarder to collect the logs and traces, or do we need the Splunk OpenTelemetry Connector?
3. Could you share a link or document on how to ingest OpenTelemetry logs into Splunk?
