All Topics


I have a logfile like this:

2024-06-14 09:34:45,504 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc6285603725604350160.tmp&name=Dittmar%20-%20NO%20Contents%20-%20%20company%20Application%20(Please%20Sign)%20-%20signed&contentCreator=ALEXANDER BLANCO&mimeType=application/pdf&accountNum=09631604&policyNum=12980920&jobIdentifier=34070053
2024-06-14 09:34:45,505 INFO [com.mysite.core.repo.upload.FileUploadWebScript] [http-nio-8080-exec-43] Uploading file to pc from \\myloc\CoreTmp\app\pc\in\gwpc628560372560435
2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp&name=wagnac%20%20slide%20coverage%20b&description=20% rule&contentCreator=JOSEY FALCON&mimeType=application/pdf&accountNum=09693720&policyNum=13068616
2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.upload.FileUploadWebScript] [http-nio-8080-exec-43] The Upload Service /repo/service/company/upload failed in 0.000000 seconds, null
2024-06-13 09:22:49,103 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-43] Exception from executeScript: 051333149 Failed to execute web script.
org.springframework.extensions.webscripts.WebScriptException: 051333149 Failed to execute web script.
    at com.mysite.core.repo.BaseWebScript.execute(BaseWebScript.java:105)
    at org.repo.repo.web.scripts.RepositoryContainer.lambda$transactionedExecute$2(RepositoryContainer.java:556)
    at org.repo.repo.transaction.RetryingTransactionHelper.doInTransaction(RetryingTransactionHelper.java:450)
    at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecute(RepositoryContainer.java:539)
    at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:663)
    at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:699)
    ... 23 more
Caused by: java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - Error at index 0 in: " r"
    at java.base/java.net.URLDecoder.decode(URLDecoder.java:232)
    at java.base/java.net.URLDecoder.decode(URLDecoder.java:142)
    at com.mysite.core.repo.util.RepositoryUtils.decodeValue(RepositoryUtils.java:465)
    at com.mysite.core.repo.BaseWebScript.getParameterMap(BaseWebScript.java:138)
    at com.mysite.core.repo.upload.FileUploadWebScript.executeImpl(FileUploadWebScript.java:37)
    at com.mysite.core.repo.BaseWebScript.execute(BaseWebScript.java:75)
    ... 47 more
2024-06-13 09:22:49,124 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-53] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/search Query String: center=cc&docId=a854dbad-af6e-43e3-af73-8ac66365e000

Now there are multiple log entries, so we need to first check for the presence of the error "Illegal hex characters in escape (%) pattern". Then, looking at the SessionID (in this case [http-nio-8080-exec-43], but there can be many others and possibly duplicate SessionIDs in the log), check the line starting with "Query String" with the same or close timestamp (HH:MM) and create a report like this:

AccountNumber | PolicyNumber | Name | Location
09693720 | 13068616 | wagnac%20%20slide%20coverage%20b | \\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp

As you can see, there are two entries in the logfile for the same SessionID http-nio-8080-exec-43, but we want a record only for the entry where we got 1. the error "Illegal hex characters in escape" and 2. an entry originating at 2024-06-13 09:22. We can compare _time too, but since the request event and the error event can differ slightly in time, it is better to search and compare with strftime(_time, "%Y-%m-%d %H:%M"). This way it compares the date, hour, and minute. We might also see the same error with the same SessionID elsewhere in the log, but that would have a different timestamp, so it is very important to check the formatted time as well.

I created one Splunk report. The inner and outer queries are able to provide results separately, but when I merge and run them, although it is looking at the required events, it does not return any data in the table:

index=myindex "Illegal hex characters in escape (%) pattern"
| rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
| eval outer_timestamp=strftime(_time, "%Y-%m-%d %H:%M")
| table outer_timestamp, sessionID
| join type=inner sessionID
    [ search index=index "Query String" AND "myloc" AND "center=pc"
    | rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
    | rex "accountNum=(?<AccountNum>\d+)"
    | rex "policyNum=(?<PolicyNum>\d+)"
    | rex "name=(?<Name>[^&]+)"
    | rex "description=(?<Description>[^&]+)"
    | rex "location=(?<Location>[^&]+)"
    | eval inner_timestamp=strftime(_time, "%Y-%m-%d %H:%M")
    | table sessionID, AccountNum, PolicyNum, Name, Description, Location, inner_timestamp ]
| where outer_timestamp = inner_timestamp
| table outer_timestamp, sessionID, AccountNum, PolicyNum, Name, Description, Location

What can be the issue? How can I get the desired result? Thanks!
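Two things stand out in the query as posted: the subsearch searches index=index while the outer search uses index=myindex (so an inner join against an empty subsearch returns nothing), and join-based correlation is fragile here anyway. Below is a join-free sketch that groups error and request events by SessionID plus the formatted minute; it assumes index=myindex throughout and reuses the rex extractions from the question, so treat field names and search terms as placeholders to verify:

index=myindex ("Illegal hex characters in escape (%) pattern" OR ("Query String" "center=pc" "myloc"))
| rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
| eval minute=strftime(_time, "%Y-%m-%d %H:%M")
| eval is_error=if(searchmatch("Illegal hex characters in escape"), 1, 0)
| rex "accountNum=(?<AccountNum>\d+)"
| rex "policyNum=(?<PolicyNum>\d+)"
| rex "name=(?<Name>[^&]+)"
| rex "location=(?<Location>[^&]+)"
| stats max(is_error) as has_error values(AccountNum) as AccountNum values(PolicyNum) as PolicyNum values(Name) as Name values(Location) as Location by sessionID minute
| where has_error=1 AND isnotnull(AccountNum)
| table minute sessionID AccountNum PolicyNum Name Location

Because the grouping key is sessionID plus the %Y-%m-%d %H:%M minute, a reused SessionID at a different timestamp lands in a different group, which matches the requirement described above.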
I have 2 records with PaymentType as send and receive. I would like to extract only the PaymentType=receive record so that I can compare further. Could you please let me know how I can extract PaymentType as receive only?

transaction: {"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"receive","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}
transaction: {"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"send","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}
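A minimal sketch, assuming the events look exactly like the samples above (the index name is a placeholder): pull the JSON out from behind the "transaction:" prefix, let spath parse it, and keep only the receive events:

index=your_index "transaction:"
| rex field=_raw "transaction:\s+(?<json>\{.+\})"
| spath input=json
| search PaymentType=receive

If Splunk is already auto-extracting PaymentType at search time, the rex/spath steps may be unnecessary and | search PaymentType=receive alone would do.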
Hello Team, I need assistance with joining 2 SPL queries to get the desired output. Refer to the log snippet below. As per the log pattern, there are distinct transaction IDs with the ORA-00001 error message. The requirement is to identify all such transactions with the error message. Please suggest.

240614 04:35:50 Algorithm: Al10: <=== Recv'd TRN: 48493009394940303 (TQ_HOST -> TQ_HOST)
240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Error Executing CompareRBSrules Procedure.
240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:

I have the below 2 queries with their respective outputs:

Query 1:
index=test_index source=/test/instance ("<=== Recv'd TRN:")
| rex field=_raw "\<=== Recv'd TRN:\s+(?<transaction_id>\w+)"
| table _time transaction_id
Output: _time | transaction_id

Query 2:
index=test_index source=/test/instance ("ORA-00001")
| table _time _raw
Output: _time | _raw

I want to merge or join both results and get the final output as below:
_time | transaction_id | _raw
In this case (example):
240614 04:35:50 | 48493009394940303 | ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated

Please suggest what modifications need to be made to the above queries to get this desired result. @ITWhisperer - Kindly help.
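One join-free sketch, assuming each ORA-00001 line belongs to the most recent "Recv'd TRN" line before it (as in the snippet): search both event types together, extract the TRN where present, and let streamstats carry the last non-null transaction_id forward onto the error events:

index=test_index source=/test/instance ("<=== Recv'd TRN:" OR "ORA-00001")
| rex field=_raw "Recv'd TRN:\s+(?<trn>\w+)"
| sort 0 _time
| streamstats last(trn) as transaction_id
| search "ORA-00001"
| table _time transaction_id _raw

The sort 0 _time puts events in ascending order so the fill-forward works; if multiple TRNs can be in flight concurrently, this simple ordering assumption breaks and you would need a stronger correlation key.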
Hi team, I have two indexers in a clustered environment, and one of my colleagues created an index on both indexers directly (/opt/splunk/etc/apps/search/indexes.conf), not on the cluster master. This is a very old index with more than 50 GB of data. If I add the same config on the master (/opt/splunk/etc/master-apps/_cluster/local/indexes.conf), will it hamper anything? Would I lose any data?
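If the stanza pushed from the master matches what is already defined on the peers, the data paths do not change and no buckets are touched; only a mismatch in paths would be risky. A sketch of what that might look like (the stanza name and settings below are placeholders; copy the exact settings from the peers' indexes.conf):

# /opt/splunk/etc/master-apps/_cluster/local/indexes.conf on the cluster master
[my_old_index]
homePath   = $SPLUNK_DB/my_old_index/db
coldPath   = $SPLUNK_DB/my_old_index/colddb
thawedPath = $SPLUNK_DB/my_old_index/thaweddb

# then validate and push the bundle from the master:
splunk validate cluster-bundle
splunk apply cluster-bundle

You would likely also want to remove the duplicate stanza from /opt/splunk/etc/apps/search/indexes.conf on the peers afterwards, so the master's copy is the single source of truth.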
Hi Team, Can we compress the logs using the Splunk HEC HttpEventCollectorLogbackAppender? Please guide us on how to compress logs using the Splunk HEC configuration in Logback.
Our Splunk runs in local time, and Splunk Alerts with a Cron schedule and a cron expression such as "00 4,8,12,18 * * *" will run four times a day at the given LOCAL times. How can I tell it to run at UTC times?
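One workaround, assuming your local offset from UTC is fixed, is simply to shift the hours in the cron expression yourself. For example, if local time were UTC-5 (a hypothetical offset; substitute your own), firing at 04:00, 08:00, 12:00, and 18:00 UTC becomes:

00 23,3,7,13 * * *

Note that a fixed shift drifts by an hour twice a year under DST. An alternative worth verifying: Splunk evaluates a scheduled search's cron in the time zone of the search's owner, so owning the alert with an account whose time-zone preference is set to UTC may achieve this without manual shifting.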
Hello, I have Splunk installed on 3 Windows servers (search head, indexer, HF). I upgraded from 8.2.x to 9.2.1. On the search head and the indexer everything is working, including the KV store (it was upgraded to wiredTiger before the upgrade). BUT on the HF the KV store is failing. In the MongoDB log file I can see:

CONTROL [main] Failed global initialization: InvalidSSLConfiguration: Could not read private key attached to the selected certificate, ensure it exists and check the private key permissions

splunk show kvstore-status --verbose shows:

This member:
backupRestoreStatus : Ready
disabled : 0
featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: 127.0.0.1:8191]
guid : xxxxxxxxxxxxxxxxxxxx
port : 8191
standalone : 1
status : failed
storageEngine : wiredTiger

I tried to delete the server.pem file and also ran splunk clean kvstore --local, but I still get the same error. Commenting out "sslPassword" under the [sslConfig] stanza in server.conf didn't help. The pfx file was added to the Windows certificate store, but I'm not sure it was done the right way. I would be happy for any help.
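The mongod error points specifically at the private key not being readable, so one thing worth checking, assuming a default install path and that splunkd runs as SYSTEM (both assumptions; adjust for your environment), is the ACL on the regenerated server.pem:

REM List the ACL on the KV store certificate; path assumes a default install
icacls "C:\Program Files\Splunk\etc\auth\server.pem"

REM Grant read access to the account running splunkd if it is missing
REM (the SYSTEM account here is an assumption)
icacls "C:\Program Files\Splunk\etc\auth\server.pem" /grant "NT AUTHORITY\SYSTEM:R"

If the HF runs under a dedicated service account instead, grant that account read access and restart Splunk so mongod re-reads the certificate.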
Hello, has anyone worked with traces (generated with OpenTelemetry) from an application on Splunk Enterprise? I am ingesting this information with OpenTelemetry and would like to make use of it by following the traces. Is there any useful add-on to visualize this data? Thanks and cheers, Jar
Hi Team, Please help me with the steps to enable boot-start of the Splunk forwarder on Oracle Linux 6.x. Splunk forwarder version: 9.0.8. Splunk version: 9.0.5. Regards, Shabana
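On Oracle Linux 6.x there is no systemd, so enable boot-start writes a SysV init script to /etc/init.d/splunk. A minimal sketch, assuming the default install path and a "splunk" service user (both assumptions):

# run as root on the forwarder host
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk enable boot-start -user splunk
/opt/splunkforwarder/bin/splunk start

After this, the forwarder should start via the init script at boot; you can verify with chkconfig --list splunk.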
The transforms to set sourcetypes have a bug: the regex uses a capture group that is not used in the FORMAT statement. When this is the case, Splunk does not return a match on the regex. To get this to work it is necessary to change the regex to a non-capturing group, e.g. for:

[auditdclasses2]
REGEX = type\=(ANOM_|USER_AVC|AVC|CRYPTO_REPLAY_USER|RESP)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::linux:audit:ocsf:finding

the regex must be changed to

REGEX = type\=(?:ANOM_|USER_AVC|AVC|CRYPTO_REPLAY_USER|RESP)

Then it works. The same applies to the other stanzas, auditdclasses1 - 6.
Hi, how can we collect server logs without installing the Splunk Universal Forwarder? The team that owns the server is not willing to install the UF. Please let me know if there is any other way to collect the data, and how. Thanks, Karthi
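The usual agentless options are syslog forwarding to a Splunk-side syslog receiver, or having the application push events over the HTTP Event Collector. A minimal HEC sketch, assuming a HEC token has already been created on the Splunk side (host, token, sourcetype, and index below are placeholders):

curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "sample log line", "sourcetype": "myapp:log", "index": "main"}'

This only works if the server-owning team can run or schedule something that ships the logs; for plain flat files with no cooperation at all, syslog or a network share read by a Splunk instance you control are the remaining routes.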
I can successfully send data to Splunk Cloud using the HEC webhook via a Curl command. However, when attempting to send events from Splunk Observability to Splunk Cloud using the Generic Webhook method, it doesn't seem to function properly.
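One plausible cause, offered as an assumption to verify: the HEC /services/collector/event endpoint requires the HEC JSON envelope ({"event": ...}), while a generic webhook posts its own arbitrary JSON body, which HEC rejects. The /services/collector/raw endpoint accepts an arbitrary body, so pointing the webhook there may work (the Splunk Cloud hostname below is a placeholder):

curl -k "https://http-inputs-mystack.splunkcloud.com/services/collector/raw" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"alert": "test event from Observability"}'

Checking the webhook's delivery response (or splunkd's HEC error metrics) should confirm whether the payload format is the problem.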
Hi All, from the Splunk documentation I know that Splunk supports Docker/Portainer hosting. I would like to check whether Splunk Enterprise officially supports hosting in Kubernetes.
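For Kubernetes, the supported route is the Splunk Operator for Kubernetes, which manages Splunk Enterprise deployments via custom resources. A minimal sketch of a standalone instance, assuming the operator is already installed in the splunk-operator namespace (the apiVersion varies between operator releases, so treat it as an assumption):

apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: single-instance
  namespace: splunk-operator

Applying this with kubectl apply -f would have the operator stand up a single Splunk Enterprise pod; clustered topologies use other kinds such as ClusterManager and IndexerCluster.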
Does Splunk DBConnect support gMSA accounts? If so, when configuring the Splunk Identity, do I leave the password field empty?
For CIM compliance I am trying to fill the action field from some logs using a case. This works in search but not in a calculated field; I see some others have had similar issues, but there has not been an answer here. I am on Cloud so I cannot directly change the confs, but calculated fields have been working fine so far. Simple case statements that do not use multivalue fields from objects (e.g. category instead of entities{}.remediationStatus) work as expected in calculated fields. The events have a setup similar to this:

{"entities": [{"name": "somename"}, {"name": "other naem", "remediationStatus": "Prevented"}]}

Search (WORKS):

eval action=case('entities{}.remediationStatus'=="Prevented", "blocked", 'entities{}.deliveryAction'=="Blocked", "blocked", 'entities{}.deliveryAction'=="DeliveredAsSpam", "blocked", true(), "allowed")

Calculated field (DOESN'T work):

action=case('entities{}.remediationStatus'=="Prevented", "blocked", 'entities{}.deliveryAction'=="Blocked", "blocked", 'entities{}.deliveryAction'=="DeliveredAsSpam", "blocked", true(), "allowed")
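One hedged workaround: comparing a multivalue field with == only matches in some evaluation contexts, whereas mvfind explicitly tests every value against a regex and returns the index of the first match (or null). A sketch of the calculated field rewritten that way (the regexes are assumptions based on the values shown above):

action=case(
    isnotnull(mvfind('entities{}.remediationStatus', "^Prevented$")), "blocked",
    isnotnull(mvfind('entities{}.deliveryAction', "^(Blocked|DeliveredAsSpam)$")), "blocked",
    true(), "allowed")

If this still returns nothing, it may mean the entities{}.* fields are not yet extracted at the point calculated fields are evaluated, in which case an EVAL in props via a support ticket (or an automatic lookup) would be the fallback.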
Hello team, I am trying to create a macro and then use it in my Splunk dashboard. The purpose is to get the time of the entered input in the dashboard in UTC only, irrespective of the user's time-zone setting in Splunk. My macro is:

[strftime_utc(2)]
args = field, format
definition = strftime($field$ - (strptime(strftime($field$, \"%Y-%m-%dT%H:%M:%SZ\"), \"%Y-%m-%dT%H:%M:%S%Z\")-strptime(strftime($field$, \"%Y-%m-%dT%H:%M:%S\"), \"%Y-%m-%dT%H:%M:%S\")), \"$format$\")

and now my search looks like:

*My query* | eval utc_time=`strftime_utc(_time, "%Y-%m-%dT%H:%M:%SZ")`

so that I always get the output in UTC only. But I am getting the error below:

Error in 'eval' command: The expression is malformed. An unexpected character is reached at '\"%Y-%m-%dT%H:%M:%SZ\"), \"%Y-%m-%dT%H:%M:%SZ\") - strptime(strftime(_time, \"%Y-%m-%dT%H:%M:%S\"), \"%Y-%m-%dT%H:%M:%S\")), \"%Y-%m-%dT%H:%M:%SZ\"))'.

How can I resolve this? Any help is appreciated. Thanks
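Notice that the error message still contains the backslashes, which suggests the escaped quotes are being passed literally into eval. In macros.conf the definition is expanded as-is, so quote characters should not be backslash-escaped. A sketch of the same definition with plain quotes (the time-zone arithmetic is left exactly as in the question):

[strftime_utc(2)]
args = field, format
definition = strftime($field$ - (strptime(strftime($field$, "%Y-%m-%dT%H:%M:%SZ"), "%Y-%m-%dT%H:%M:%S%Z") - strptime(strftime($field$, "%Y-%m-%dT%H:%M:%S"), "%Y-%m-%dT%H:%M:%S")), "$format$")

If you created the macro through the UI rather than macros.conf, the same applies: enter the quotes unescaped in the Definition field.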
When navigating to "ESS" -> "Data" -> "Data Availability", I get the following error:

Error in 'lookup' command: Could not construct lookup 'SSE-data_availability_latency_status.csv, productId'. See search.log for more details.

I can find the definition of SSE-data_availability_latency_status under "Lookups" -> "Lookup definitions". However, it looks like the SSE-data_availability_latency_status.csv file doesn't exist:

| inputlookup SSE-data_availability_latency_status.csv
--> The lookup table 'SSE-data_availability_latency_status.csv' requires a .csv or KV store lookup definition.

I'm using Splunk Cloud 9.1.2312.102 and ESS 3.8.0. Thanks for your reply in advance!
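If the backing file was never created, one possible way to bootstrap it so the lookup command stops failing is to write it out yourself; whether the app expects columns beyond productId is unknown, and the placeholder row below is an assumption:

| makeresults
| eval productId="placeholder"
| outputlookup SSE-data_availability_latency_status.csv

Normally the app populates this file via its own scheduled searches, so it is also worth checking whether those saved searches are enabled and have run since installation.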
I'd like to monitor log files and ingest only specific lines from these files. My props.conf and transforms.conf have no errors, but for some reason the props.conf is not taking effect: instead of indexing only the specific lines, it is indexing the whole log. Is there a specific path where the .conf files must be placed, or is there any other solution?
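The usual gotcha is placement: index-time TRANSFORMS only run on the first full Splunk instance that parses the data (a heavy forwarder or the indexers), never on a universal forwarder, and the files belong in $SPLUNK_HOME/etc/system/local or an app's local directory on that parsing tier, followed by a restart. A minimal null-queue sketch that drops everything and then keeps only matching lines (the sourcetype name and keep-regex are assumptions):

# props.conf -- on the heavy forwarder or indexer that parses the data
[my:sourcetype]
TRANSFORMS-filter = setnull, keep_specific

# transforms.conf
# send every event to the null queue first...
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then route only the lines we want back to the index queue
[keep_specific]
REGEX = ERROR|WARN
DEST_KEY = queue
FORMAT = indexQueue

The ordering in TRANSFORMS-filter matters: setnull must come first so keep_specific can override it for the lines you want.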
We had a Nessus scan, but the Nessus configuration was not completed in the Tenable add-on on the Splunk side, so we missed the scan and the data was not onboarded to Splunk. Now we want to get that data back. How should we do that?
Hello everyone, We are currently running Splunk Enterprise version 9.0.6 on a Windows Server 2016 machine as part of a distributed Splunk environment. Due to compliance requirements, we need to upgrade to at least version 9.1.4. However, Splunk Enterprise 9.1.4 officially lists Windows Server 2019 as a prerequisite. I have tested the upgrade in our lab environment on Windows Server 2016, and it appears to work without any immediate issues. Despite this, I am concerned about potential unforeseen impacts or compatibility problems since the official documentation recommends Windows Server 2019. Additionally, our OS team has advised that upgrading the OS from Windows Server 2016 to 2019 could potentially corrupt the servers, necessitating a rebuild. My boss is understandably reluctant to take this risk, especially since the current server is planned for retirement by the end of this year. Has anyone else performed a similar upgrade on Windows Server 2016 within a distributed Splunk environment? Are there any known issues or potential risks we should be aware of? Any insights or experiences would be greatly appreciated.