All Topics



Hello, some events are not parsed correctly and are not split, only when there is a timestamp, especially with "slow" events.
Need help with a regex for the below data. Please assist me on the same.

field name -------- fieldvalue
Devicename ------ GNTESTFS1

Sample data:

Jan 5 15:34:18 7.73.151.197 1 2023-01-05T14:34:17Z 1.2.3.44 StorageArray - - [0@0] GNTESTFS1;2288;Critical;Either the NTP server's resolved or configured IP address is wrong or the IP address is unavailable via the attached network
Jan 5 15:31:20 7.73.151.197 1 2023-01-05T14:31:19Z 1.2.3.44 StorageArray - - [0@0] GNTESTFS1;2288;Critical;Either the NTP server's resolved or configured IP address is wrong or the IP address is unavailable via the attached network
Jan 5 09:32:37 7.73.151.197 1 2023-01-05T08:32:36Z 1.2.3.44 StorageArray - - [0@0] GNTESTFS1;2288;Critical;Either the NTP server's resolved or configured IP address is wrong or the IP address is unavailable via the attached network

Thanks in advance
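A rough starting point, assuming every event carries the same semicolon-delimited payload after the [0@0] marker (the capture-group names Devicename, EventID, Severity and Description are made up for illustration):

... | rex field=_raw "\[0@0\]\s+(?<Devicename>[^;]+);(?<EventID>\d+);(?<Severity>[^;]+);(?<Description>.+)$"

Against the sample events this would give Devicename=GNTESTFS1, EventID=2288, Severity=Critical and the NTP text as Description.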
Hello Splunkers, I faced the following issue: I deployed an app on a UF; this app should monitor a specific file on my machine, let's say /<my_file>. The thing is, I'm running the Splunk service as a non-root user (the splunk user), and this user does not have permission to read this file. I know how to solve this with the setfacl command, but how could I have spotted this issue in the first place? I thought that this permission error would be visible in splunkd.log, but that's not the case... I am trying to find a way to monitor the other possible "permission denied" errors without manually logging in as the splunk user and trying to open the specific files. Thanks a lot, GaetanVP
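Not a definitive answer, but one way to sweep for this kind of problem from the search side, assuming the forwarder's internal logs reach the indexers and the failure is logged under some wording like "permission" or "cannot open" (the exact message and component vary by version and platform; <my_uf_host> is a placeholder):

index=_internal source=*splunkd.log* host=<my_uf_host> (log_level=WARN OR log_level=ERROR) ("permission" OR "cannot open" OR "insufficient")
| table _time, component, _raw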
Hi, I want to check if all the values (from different fields) are < a; if so, it will be marked as "yes". If one of them is > a, it will be "no". Given that there are not always 3 values (some ids have only value1, or only value1 and value2), this eval gives nothing in the result:

|eval test=if(value1<a and value2<a and value3<a, "yes", "no")

I'm searching for a way to take a value into account only when it is not null:

|eval test=if(isnotnull(value1)<a and isnotnull(value2)<a and isnotnull(value3)<a, "yes", "no")

but I have this error: Error in 'eval' command: Type checking failed. The '<' operator received different types.
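isnotnull() returns a boolean, which is why comparing its result with < fails type checking. A sketch that sidesteps this by letting a missing value pass the check, so only fields that actually exist are compared against a:

| eval test=if((isnull(value1) OR value1<a) AND (isnull(value2) OR value2<a) AND (isnull(value3) OR value3<a), "yes", "no")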
I have a Splunk Cloud URL: https://prd-p-9alo5.splunkcloud.com and username: sc_admin
I need to extract fields which are in JSON format. I have been trying to use the spath command to extract the following fields, which are under log, but I am not able to fetch them; I am failing somewhere. Here is an example of my data:

{"log":"[18:15:21.888] [INFO ] [] [c.c.n.t.e.i.T.ServiceCalloutEventData] [akka://MmsAuCluster/user/$b/workMonitorActor/$M+c] - channel=\"AutoNotification\", productVersion=\"2.3.3-0-1-eb5b8cadd\", apiVersion=\"V1\", uuid=\"0b8549ff-1f14-4fd5-99c5-b3f2240d7da8\", eventDateTime=\"2023-01-06T07:15:21.888Z\", severity=\"INFO\", code=\"ServiceCalloutEventData\", component=\"web.client\", category=\"integrational-external\", serviceName=\"Consume Notification\", eventName=\"MANDATE_NOTIFICATION_RETRIEVAL.CALLOUT_REQUEST\", message=\"Schedule Job start, getNotification request\", entityType=\"MNDT\", externalSystem=\"SWIFTPAG\", start=\"1672989321888\", url=\"https://sandbox.swift.com/npp-mms/v1/subscriptions/29fbe070057811eca4fa68aa418f5c2a/notifications\", swiftMessagePartnerBIC=\"RESTMP01\", messageIdentification=\"e1f24a3b8d9111edb3368d1476d87136\", subscriptionIdentification=\"29fbe070057811eca4fa68aa418f5c2a\" producer=com.clear2pay.na.mms.au.notification.batch.GetNotificationService \n","stream":"stdout","docker":{"container_id":"89efc58c0a343ee01daa2fcdeadb3b952599f0c142fb7041f95a9d6702fe49d2"},"kubernetes":{"container_name":"mms-au","namespace_name":"msaas-t4","pod_name":"mms-au-b-1-54b4589f89-g74lp","container_image":"pso.docker.internal.cba/mms-au:2.3.3-0-1-eb5b8cadd","container_image_id":"docker-pullable://pso.docker.internal.cba/mms-au@sha256:9d48d5af268d28708120ee3f69b576d371b5e603a0e0c925c7dba66058654819","pod_id":"b474ec16-fc9f-4b7a-9319-8302c0185f83","pod_ip":"100.64.87.219","host":"ip-10-3-197-177.ap-southeast-2.compute.internal","labels":{"app":"mms-au","dc":"b-1","pod-template-hash":"54b4589f89","release":"mms-au"},"master_url":"https://172.20.0.1:443/api","namespace_id":"48ee871a-7e60-45c4-b0f4-ee320a9512f5","namespace_labels":{"argocd.argoproj.io/instance":"appspaces","ci":"CM0953076","kubernetes.io/metadata.name":"msaas-t4","name":"msaas-t4","platform":"PSU","service_owner":"somersd","spg":"CBA_PAYMENTS_TEST_COORDINATION"}},"hostname":"ip-10-3-197-177.ap-southeast-2.compute.internal","host_ip":"10.3.197.177","cluster":"nonprod/pmn02"}

I need to extract a few fields which are under log. Can anyone help me with this?

Thanks in advance
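A sketch of one possible approach, assuming the events are well-formed JSON and that the wanted values (uuid, eventName, message, and so on) live inside the log string as key="value" pairs rather than as nested JSON. spath pulls out the outer keys (log, stream, kubernetes.*), and a rex per field then digs into the log string:

... | spath
| rex field=log "uuid=\"(?<uuid>[^\"]+)\""
| rex field=log "eventName=\"(?<eventName>[^\"]+)\""
| rex field=log "message=\"(?<log_message>[^\"]+)\""

If the sourcetype already has automatic JSON extraction (KV_MODE=json or indexed extractions), the spath step may be unnecessary and the rex commands can run directly on the extracted log field.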
Hello, I'm using stats list() to merge all my values into one field, but I want them to be separated from each other by ";" instead of a space. Example:

USER_PHONE
123
456
789

When I use |stats list(USER_PHONE), the result I receive (in the csv that I output) was 123 456 789. The result that I want is 123;456;789. I tried to use

... | rex mode=sed field=USER_PHONE "s/ /;/g"

but it has no effect. What should I do?
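list() produces a multivalue field rather than a single space-separated string, which is why the sed replacement finds no space to substitute. A sketch using mvjoin to collapse the values before the export, assuming one semicolon-delimited cell is what the csv should contain:

... | stats list(USER_PHONE) as USER_PHONE
| eval USER_PHONE=mvjoin(USER_PHONE, ";")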
Hi all, I have an inputlookup file named leavers.csv which will be updated automatically; this file contains the userID.

I will need to use the userID to retrieve the user email from index=zscaler, and from there I will need to search index=exomsgtrace to determine whether there is any outbound email from the users listed in leavers.csv.

Can I get your help to construct all of the requirements into a single query?
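A rough single-query sketch, with the caveat that the field names (userID, user_email, sender_address) and the outbound filtering are guesses that would need to be replaced with the actual field names in the zscaler and exomsgtrace data:

index=exomsgtrace
    [ search index=zscaler
        [ | inputlookup leavers.csv | fields userID ]
      | stats count by user_email
      | fields user_email
      | rename user_email as sender_address ]
| stats count by sender_address

The innermost subsearch turns the lookup into a userID filter against zscaler, the middle one hands the resolved email addresses to exomsgtrace as a sender_address filter, and the final stats shows which leavers actually sent anything.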
Looking for a query to convert the results like this. I have a search that produces a report using appendcols:

a | b | c
5785 | 5731 | 100

I want to get the report like this, basically trying to format the names of the fields and apply a sum/diff:

Total of messagea | Total of messageb | Total of messagec | Diff of Total a and total b
5785 | 5731 | 100 | 54

This is the current query:

index!="internal" sourcetype="a" "messagea" | stats count as a | appendcols [search index!="internal" sourcetype="b" "messageb" | stats count as b ] | appendcols [search index!="internal" sourcetype="c" "messagec" | stats count as c ]
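A sketch of how the existing query could be extended, assuming the only changes wanted are the cosmetic renames plus one calculated column (5785 - 5731 = 54):

index!="internal" sourcetype="a" "messagea" | stats count as a
| appendcols [search index!="internal" sourcetype="b" "messageb" | stats count as b ]
| appendcols [search index!="internal" sourcetype="c" "messagec" | stats count as c ]
| eval "Total of messagea"=a, "Total of messageb"=b, "Total of messagec"=c, "Diff of Total a and total b"=a-b
| table "Total of messagea", "Total of messageb", "Total of messagec", "Diff of Total a and total b"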
I am running the | rest /services/search/jobs command to check my failed searches for the last 24 hrs, but I see that some of the searches are not getting captured. I wanted to know how far into the past the rest command searches the data. Does it bring up results only for the last few hours, or for a few days?
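Not a definitive answer, but the jobs endpoint only reflects search artifacts that still exist on disk, and those expire according to each job's TTL (often minutes to hours for ad-hoc searches), so older failures can simply have aged out. If the goal is a 24-hour view, one hedged alternative is the audit index, assuming your role is allowed to read it:

index=_audit action=search info=* earliest=-24h
| stats count by user, info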
Hello folks, I need your help. Here is the splunkd.log output, grepped for kvstore. Please review and advise what went wrong and what needs to be done to fix this issue.
Hi, I am trying to integrate log4j with Splunk as shown below, and I am getting this error: Log4j2-TF-1-AsyncLoggerConfig-1 ERROR Unable to send HTTP in appender [httptest] java.io.IOException: 401 Unauthorized.

<Http name="httptest" url="https://prd-p-xtpce.splunkcloud.com:8088/services/collector/raw">
    <Property name="Authorization" value="Splunk xxxxxxx-998c-4547-beea-xxxxxx"/>
    <Property name="disableCertificateValidation" value="true"/>
    <JsonLayout properties="true"/>
</Http>

The token is valid, and I am able to post data from Postman.

Thanks
I want to change the color of the title and description to black. I can't change it in Dashboard Studio, so I think I need to change it in the original source.

[original source]

Help me ...... T. T
So I configured a 4-member search head cluster successfully, with a captain and a deployer server. However, when I go to deploy a simple app such as the GlobalBanner, it isn't working per this doc: Use the deployer to distribute apps and configuration updates.

I create the following subdirectory and conf file on my deployer server:

C:\Program Files\Splunk\etc\shcluster\apps\GlobalBanner\local\global-banner.conf

and I push the bundle using this command from the deployer server:

splunk apply shcluster-bundle -target https://MySearchHeadCaptain.MyDomain.net:8089

and the file ends up in the following directory on the search head cluster members:

C:\Program Files\Splunk\etc\apps\GlobalBanner\default\global-banner.conf

Why does it end up in the default subfolder?

My deployer server is set up with the push mode set to full, like so:

[shclustering]
pass4SymmKey = $7$SoMeJuNk=
shcluster_label = shcluster1
deployer_push_mode = full

1,000 thanks to anyone who can set me straight
Hey, I want to set up the Stream add-on on Splunk for a distributed environment where I have one HF, one indexer and one SH. I want to set it up for forwarding DNS logs only. Can someone tell me how to configure and set it up?
Hi, I am trying to upgrade my Splunk environment from 7.x to 8.1.9. I want to make sure that my universal forwarder, which is 6.5.3, is compatible with 8.1.9.

Thanks,
Suresh
Hello everyone, I have a search for after-hours logins between 6pm and 6am. Right now I have event codes 4625 and 4624 with logon_type 2 and 3. This alert picks up Windows automated services, but I was wondering if there is a way I can have this search pick up only user accounts instead of Windows automated services. My search string is:

index=(myindexname) source="wineventlog:security" Account_Name=* (EventCode=4625 OR EventCode=4624) (Logon_Type=2 OR Logon_Type=3) Logon_Process=Kerberos earliest=-7d@d-6h latest=-7d@d+6h
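One hedged refinement, assuming the noise comes from machine and well-known service accounts: Windows computer accounts end in $, so excluding those (plus names like SYSTEM) is a common first cut, with the exact exclusion list depending on the environment:

index=myindexname source="wineventlog:security" (EventCode=4624 OR EventCode=4625) (Logon_Type=2 OR Logon_Type=3) Logon_Process=Kerberos
    Account_Name=* Account_Name!="*$" Account_Name!="SYSTEM" Account_Name!="LOCAL SERVICE" Account_Name!="NETWORK SERVICE" Account_Name!="ANONYMOUS LOGON"
    earliest=-7d@d-6h latest=-7d@d+6h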
Failed to contact license manager: reason='Unable to connect to license manager=https://SplunkInstance01.MyDomain.net:8089 Error connecting:error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.', first failure time= [...]

First, mea culpa for asking this, as I am sure it has been asked before, but I couldn't quite understand how to use the openssl verify command. When I try to run it, I get this error:

C:\Program Files\Splunk\etc\auth\distServerKeys>openssl verify -CAfile private.pem trusted.pem
WARNING: can't open config file: C:\\jnkns\\workspace\\build-home/ssl/openssl.cnf
Error loading file private.pem

I also tried to run it from the bin subdirectory, home of the openssl utility:

C:\Program Files\Splunk\bin>openssl verify -CAfile "C:\Program Files\Splunk\etc\auth\distServerKeys\private.pem" "C:\Program Files\Splunk\etc\auth\distServerKeys\trusted.pem"
WARNING: can't open config file: C:\\jnkns\\workspace\\build-home/ssl/openssl.cnf
Error loading file C:\Program Files\Splunk\etc\auth\distServerKeys\private.pem

I suspect this private/public key pair may still be the stale default self-signed combination, causing Splunk to frown upon it. However, what is perplexing to me is that it works on the other 20-plus servers, so I throw myself upon your mercy for help.

Please note we have over two dozen Splunk servers running version 9.0.0, all on Windows platforms, and this is the only server getting this error. All servers use our own Microsoft CA internal enterprise certificates based on a two-tier (RootCA \ IntermediateCA) architecture, so I think I know what I am doing, ha ha, at least in terms of certing.
I am setting up an alert to notify when a message is received more than 100 times in a week. I figured it out for the total, but not within a week time range. Any help is appreciated.

'Bitgo webhook error' | stats count as Bitgo_Webhook_Errors | where Bitgo_Webhook_Errors >= 100
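A sketch, assuming the quoted text is meant as a literal string match (double quotes are the string form in SPL; single quotes refer to field names) and "in a week" means a rolling 7-day window set by the search's time range:

"Bitgo webhook error" earliest=-7d@d latest=now
| stats count as Bitgo_Webhook_Errors
| where Bitgo_Webhook_Errors >= 100

Scheduling the alert with that earliest/latest (or simply picking "Last 7 days" in the time range picker) keeps the count scoped to the last week rather than all time.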
Hi! Long-time listener, first-time caller here. Our custom search command needs some slow initialization, which we would prefer to skip on repeated calls. Is there a way to keep the process alive after the first invocation and only call dispatch() afterwards? Something that does what scripttype = persist does for REST endpoints. My understanding is that the protocol would allow this. Calling dispatch() in a loop sadly doesn't work - that would've been too easy, huh? (It does work insofar as the command finishes - it's just that the script gets started again the next time.)