All Posts

Per the Container automation API docs, "the update API is supported from within a custom function". However, for the following code, "Validate" fails with "Undefined variable 'container'":

update_data = {}
update_data['name'] = 'new container name'
phantom.update(container, update_data)

What is the fix?
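A likely cause, sketched below: inside a custom function, container is not implicitly defined the way it is in a playbook block, so the container (or its id) has to arrive as a function input and be fetched explicitly. This is a minimal sketch, not the confirmed fix; the function name and the container_id parameter are illustrative, not part of the original question:

def my_custom_function(container_id=None, **kwargs):
    import phantom.rules as phantom

    # container_id is a hypothetical input to this custom function;
    # fetch the container dictionary explicitly instead of relying on
    # an implicit global named 'container'.
    container = phantom.get_container(container_id)

    update_data = {}
    update_data['name'] = 'new container name'
    phantom.update(container, update_data)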
Does anyone know how to reach Splunk Sales in the US? Is there a new process to reach them, now that they're part of Cisco? I've been trying to reach them for over a month. I've submitted contact forms on the website. No response. Not even an automated response. Calls to 1 866.GET.SPLUNK go to voicemail. Thanks.
I have a logfile like this -

2024-06-14 09:34:45,504 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc6285603725604350160.tmp&name=Dittmar%20-%20NO%20Contents%20-%20%20company%20Application%20(Please%20Sign)%20-%20signed&contentCreator=ALEXANDER BLANCO&mimeType=application/pdf&accountNum=09631604&policyNum=12980920&jobIdentifier=34070053
2024-06-14 09:34:45,505 INFO [com.mysite.core.repo.upload.FileUploadWebScript] [http-nio-8080-exec-43] Uploading file to pc from \\myloc\CoreTmp\app\pc\in\gwpc628560372560435
2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp&name=wagnac%20%20slide%20coverage%20b&description=20% rule&contentCreator=JOSEY FALCON&mimeType=application/pdf&accountNum=09693720&policyNum=13068616
2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.upload.FileUploadWebScript] [http-nio-8080-exec-43] The Upload Service /repo/service/company/upload failed in 0.000000 seconds, null
2024-06-13 09:22:49,103 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-43] Exception from executeScript: 051333149 Failed to execute web script.
org.springframework.extensions.webscripts.WebScriptException: 051333149 Failed to execute web script.
    at com.mysite.core.repo.BaseWebScript.execute(BaseWebScript.java:105)
    at org.repo.repo.web.scripts.RepositoryContainer.lambda$transactionedExecute$2(RepositoryContainer.java:556)
    at org.repo.repo.transaction.RetryingTransactionHelper.doInTransaction(RetryingTransactionHelper.java:450)
    at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecute(RepositoryContainer.java:539)
    at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:663)
    at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:699)
    ... 23 more
Caused by: java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - Error at index 0 in: " r"
    at java.base/java.net.URLDecoder.decode(URLDecoder.java:232)
    at java.base/java.net.URLDecoder.decode(URLDecoder.java:142)
    at com.mysite.core.repo.util.RepositoryUtils.decodeValue(RepositoryUtils.java:465)
    at com.mysite.core.repo.BaseWebScript.getParameterMap(BaseWebScript.java:138)
    at com.mysite.core.repo.upload.FileUploadWebScript.executeImpl(FileUploadWebScript.java:37)
    at com.mysite.core.repo.BaseWebScript.execute(BaseWebScript.java:75)
    ... 47 more
2024-06-13 09:22:49,124 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-53] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/search Query String: center=cc&docId=a854dbad-af6e-43e3-af73-8ac66365e000

Now there are multiple log entries, so we first need to check for the presence of the error "Illegal hex characters in escape (%) pattern". Then, looking at the SessionID - in this case [http-nio-8080-exec-43], though there can be many others and possibly duplicate SessionIDs in the log - check the line starting with "Query String" with the same or close timestamp (HH:MM) and create a report like this -

AccountNumber    PolicyNumber    Name                                Location
09693720         13068616        wagnac%20%20slide%20coverage%20b    \\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp

As you can see, there are two entries in the logfile for the same SessionID http-nio-8080-exec-43, but we want a record only for the entry where we got 1. the error "Illegal hex characters in escape" and 2. an entry that originated at 2024-06-13 09:22. We can compare _time too, as the request event and the error event can differ slightly in time. So it is better to search and compare with the timestamp strftime(_time, "%Y-%m-%d %H:%M"). This way it compares the date, hour, and minute. BTW, we might have the same error with the same SessionID elsewhere in the log, but it has to be at a different timestamp. So it is very important to check the time as well, but in the formatted form. I created one Splunk report. The inner and outer queries return results separately, but when I merge and run them, although the search looks at the required events, it does not return any data in the table -

index=myindex "Illegal hex characters in escape (%) pattern"
| rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
| eval outer_timestamp=strftime(_time, "%Y-%m-%d %H:%M")
| table outer_timestamp, sessionID
| join type=inner sessionID
    [ search index=myindex "Query String" AND "myloc" AND "center=pc"
    | rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
    | rex "accountNum=(?<AccountNum>\d+)"
    | rex "policyNum=(?<PolicyNum>\d+)"
    | rex "name=(?<Name>[^&]+)"
    | rex "description=(?<Description>[^&]+)"
    | rex "location=(?<Location>[^&]+)"
    | eval inner_timestamp=strftime(_time, "%Y-%m-%d %H:%M")
    | table sessionID, AccountNum, PolicyNum, Name, Description, Location, inner_timestamp ]
| where outer_timestamp = inner_timestamp
| table outer_timestamp, sessionID, AccountNum, PolicyNum, Name, Description, Location

What can be the issue? How can I get the desired result? Thanks!
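One variant worth trying, sketched below: join on the session and the minute-formatted timestamp together, so the time filtering happens inside the join rather than in a where clause afterwards. This is a minimal sketch, assuming both searches read the same index (myindex) and that the field name minute is an arbitrary choice, not anything from the original post:

index=myindex "Illegal hex characters in escape (%) pattern"
| rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
| eval minute=strftime(_time, "%Y-%m-%d %H:%M")
| table minute, sessionID
| join type=inner sessionID, minute
    [ search index=myindex "Query String" AND "myloc" AND "center=pc"
    | rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
    | rex "accountNum=(?<AccountNum>\d+)"
    | rex "policyNum=(?<PolicyNum>\d+)"
    | rex "name=(?<Name>[^&]+)"
    | rex "location=(?<Location>[^&]+)"
    | eval minute=strftime(_time, "%Y-%m-%d %H:%M")
    | table minute, sessionID, AccountNum, PolicyNum, Name, Location ]
| table minute, sessionID, AccountNum, PolicyNum, Name, Location

Giving the timestamp the same field name on both sides lets the join match on it directly, which avoids relying on a post-join comparison of two separately named fields.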
Hi @anil1219, this seems to be JSON format, so you could use INDEXED_EXTRACTIONS = JSON in the sourcetype definition in props.conf (https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Propsconf) or the spath command (https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Spath). Otherwise, you could use a regex like the following:

rex "\"PaymentType\":\"(?<PaymentType>[^\"]+)"

which you can test at https://regex101.com/r/VEeiyG/1 Ciao. Giuseppe
I have 2 records for PaymentType: send and receive. I would like to extract PaymentType as receive only so that I can compare further. Could you please let me know how I can extract PaymentType as receive only?

transaction: {"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"receive","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}

transaction: {"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"send","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}
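A minimal sketch of one way to do this, assuming the events look exactly like the samples above (your_index and the txn field name are placeholders): pull the JSON object out of the raw event, let spath extract its fields, and keep only the receive records.

index=your_index "transaction:"
| rex field=_raw "transaction: (?<txn>\{.+\})"
| spath input=txn
| where PaymentType="receive"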
Alternatively, you could use streamstats to build a list of files to match against:

index=wealth OR index=transform-file OR index=ace_message earliest=-30m
| rex field=_raw "inputFileName: (?<inputFileName>.*?),"
| rex field=inputFileName "file\_(?<ID>\d+_\d+)\_"
| streamstats values(eval(if(now()-_time<1800,ID,null()))) as IDS
| eval alertable=if((now()-_time>1800) AND (ID IN (IDS)),"True","False")
| table _time, ID, IDS, alertable
Besides the approximate time (since the times don't match), is there nothing else to relate those two particular logs together? Will your search be used in the general case to output more than one row's worth of data? If so, how far apart are the various distinct transactions (or can that be arbitrarily short)?
Hello Team, I need assistance with joining 2 SPL queries to get the desired output. Refer to the log snippet below. As per the log pattern, there are distinct transaction ids with the ORA-00001 error message. The requirement is to identify all such transactions with the error message. Please suggest.

240614 04:35:50 Algorithm: Al10: <=== Recv'd TRN: 48493009394940303 (TQ_HOST -> TQ_HOST)
240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Error Executing CompareRBSrules Procedure.
240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:

I have the below 2 queries with their respective output:

Query 1:
index=test_index source=/test/instance ("<=== Recv'd TRN:")
| rex field=_raw "\<=== Recv'd TRN:\s+(?<transaction_id>\w+)"
| table _time transaction_id
Output as: _time | transaction_id

Query 2:
index=test_index source=/test/instance ("ORA-00001")
| table _time _raw
Output as: _time | _raw

I want to merge or join both results and get the final output as below:
_time | transaction_id | _raw
In this case (example):
240614 04:35:50 | 48493009394940303 | ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated
Please suggest what modifications should be made to the above queries to get this desired result. @ITWhisperer - Kindly help.
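A minimal sketch of one way to relate the two event types without a join, assuming the Recv'd TRN line for a transaction always appears before its error lines and no other transaction interleaves between them: pull both event types in a single search, sort by time, carry the last seen transaction_id forward onto the following events, then keep only the error events.

index=test_index source=/test/instance ("Recv'd TRN:" OR "ORA-00001")
| rex field=_raw "Recv'd TRN:\s+(?<transaction_id>\w+)"
| sort 0 _time
| filldown transaction_id
| search "ORA-00001"
| table _time transaction_id _raw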
Boy am I glad to have found this thread. Got my problem solved, thank you so much!
Oh, I see. You could use a subsearch or a join:

index=wealth OR index=transform-file OR index=ace_message earliest=-30m
| rex field=_raw "inputFileName: (?<inputFileName>.*?),"
| rex field=inputFileName "file\_(?<ID>\d+_\d+)\_"
| table ID
| join type=inner left=L right=R where L.ID=R.ID
    [search index=wealth OR index=transform-file OR index=ace_message earliest=-30d latest=-30m
    | rex field=_raw "inputFileName: (?<inputFileName>.*?),"
    | rex field=inputFileName "file\_(?<ID>\d+_\d+)\_"
    | table ID]
Hi team, I have two indexers in a clustered environment, and one of my colleagues created an index on both indexers (/opt/splunk/etc/apps/search/indexes.conf), not on the cluster master. This is a very old index with more than 50GB of data. If I add the same config on the master (/opt/splunk/etc/master-apps/_cluster/local/indexes.conf), will it hamper anything? Would I lose any data?
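For reference, a minimal sketch of what the stanza pushed from the master might look like; the index name and paths are placeholders, and repFactor = auto is what tells the cluster to replicate the index:

[my_old_index]
homePath   = $SPLUNK_DB/my_old_index/db
coldPath   = $SPLUNK_DB/my_old_index/colddb
thawedPath = $SPLUNK_DB/my_old_index/thaweddb
repFactor  = auto

Note that, as far as I know, buckets written before the index became cluster-managed are not retroactively replicated; only new hot buckets are.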
Hi Team, can we compress the logs using the Splunk HEC HttpEventCollectorLogbackAppender? Please advise how to compress logs using the Splunk HEC configuration in Logback.
Clearing all bookmarks and data mappings (reset all configurations) and doing a force reset allowed the security content page to load. Performing a data mapping immediately kills access to the security content page.
It seems I made a mistake and kept the action= in the calculated field, which is why it didn't work. In addition, while testing further, it is important to wait for the test searches to age out of the cache, or to change the searches.
Hi @SamHelp, there is nothing to configure for the LB on the HFs: the clients point to the VIP, and the Load Balancer distributes the load between the HFs. Remember to configure the LB in transparent mode, to avoid ending up with the LB's hostname as the host. I suppose that you're speaking of syslog or HEC, not Universal Forwarders, which don't need the LB. Ciao. Giuseppe
Hi @SaintNick, Splunk uses the timezone of the operating system; the interface displays data according to the user's timezone, but the cron schedule still follows the OS timezone. The only way is to account for this in the cron definition; I don't know of a method to apply timezones to the cron. Ciao. Giuseppe
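A worked example of that workaround, assuming for illustration that the server runs at UTC-4 (the offset is hypothetical): to fire at 04:00, 08:00, 12:00, and 18:00 UTC, subtract the offset from each hour and schedule the local equivalents, keeping in mind that the expression must be revisited whenever DST changes the offset:

00 0,4,8,14 * * *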
Our Splunk runs in local time, and Splunk Alerts with a Cron schedule and a cron expression such as "00 4,8,12,18 * * *" will run four times a day at the given LOCAL times. How can I tell it to run at UTC times?
Hi @karthi2809, are you speaking of a Windows or a Linux server? If Windows, you can use WMI: https://docs.splunk.com/Documentation/Splunk/latest/Data/MonitorWMIdata If Linux, you can use syslog: https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitornetworkports That said, the Universal Forwarder is much more efficient, doesn't cause any issues, and puts a negligible load on the machine. Ciao. Giuseppe
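For the syslog route, a minimal sketch of an inputs.conf stanza on the receiving Splunk instance; the port and index are placeholders, and connection_host = ip records the sender's IP as the host field:

[udp://514]
sourcetype = syslog
index = main
connection_host = ip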
Hi @rdhdr, sorry, but I don't understand what you mean by "Time restrictions". You have to define a time period for your check within which you can have Start and End events. Obviously, you could have events that started earlier, where the Start event isn't in the time frame, but that's inherent to the Splunk approach: you must define the time period over which your searches execute. If needed, you could use a larger time period. Ciao. Giuseppe
Has really nobody come across this issue?