All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


My first Splunk query gives me a value in a table; the value is a jobId. I want to use this jobId in a second search query. Can we join them the Splunk way?

index=myindex cs2k_transaction_id_in_error="CHG063339403031900" major_code="ERROR"
| rex field=_raw "Job Id: (?<jobId>.*?)\."
| table jobId

index=myindex "TTY" "jobId"
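One rough sketch of joining the two (a hedged suggestion, reusing the index and field names from the question and assuming the jobId value appears verbatim in the second set of events): feed the first query into the second as a subsearch, and rename the extracted field to `search` so the subsearch returns the bare value as a literal search term.

```spl
index=myindex "TTY"
    [ search index=myindex cs2k_transaction_id_in_error="CHG063339403031900" major_code="ERROR"
      | rex field=_raw "Job Id: (?<jobId>.*?)\."
      | fields jobId
      | rename jobId as search ]
```

The subsearch runs first, and each jobId it returns becomes a term that the outer search must match.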
I don't know this area well, but the error points to an issue with "container", not "update". Within your custom function you are using container, but it seems it's not defined. How are you passing "container" into your function?
Hello, I have a case where I need to do a regex, and I built my regex using regex101; everything works great and catches everything there. But I ran into an issue where Splunk won't accept optional groups like "(\\\")?": it gives an "unmatched closing parenthesis" error until you add another closing bracket, like so: "(\\\"))?".

Another issue I encountered is that after I add this closing bracket, the regex works, but not consistently. Here's what I mean. This is a part of my regex:

\[\{(\\)?\"PhoneNumber(\\)?\":(\\)?\"(?<my_PhoneNumber>[^\\\"]+

It won't work until I add more brackets to the optional groups, as mentioned before:

\[\{(\\))?\"PhoneNumber(\\))?\":(\\))?\"(?<my_PhoneNumber>[^\\\"]+

Second issue: adding another part still works:

\[\{(\\))?\"PhoneNumber(\\))?\":(\\))?\"(?<my_PhoneNumber>[^\\\"]+)\S+OtherPhoneNumber(\\))?\":(\\))?(\"))?(?<myother_PhoneNumber>[^,\\\"]+|null)

But adding a third part with the exact same format as the second gives the "unmatched closing parenthesis" error again:

\[\{(\\))?\"PhoneNumber(\\))?\":(\\))?\"(?<my_PhoneNumber>[^\\\"]+)\S+OtherPhoneNumber(\\))?\":(\\))?(\"))?(?<myother_PhoneNumber>[^,\\\"]+|null)\S+Email(\\))?\":(\\))?(\"))?(?<email>[^,\\\"]+|null)

Am I missing something? I know the regex itself works.

Sample data of the original log:

[{"PhoneNumber":"+1 450555338","AlternativePhoneNumber":null,"Email":null,"VoiceOnlyPhoneNumber":null}]
[{\"PhoneNumber\":\"+20 425554005\",\"AlternativePhoneNumber\":\"+1 455255697\",\"Email\":\"Dam@test.com.us\",\"VoiceOnlyPhoneNumber\":null}]"}
[{\"PhoneNumber\":\"+1 459551561\",\"AlternativePhoneNumber\":\"+1 6155555533\",\"Email\":null,\"VoiceOnlyPhoneNumber\":\"+1 455556868\"}]
I created a Python script that successfully links episodes with my third-party ticketing system. I'm trying to populate that ticketing system with some of the "common field" values associated with a given episode, but I don't see a good way to do that. Does anyone have any hints on how to accomplish this? I'm probably missing something very obvious in the documentation. Thanks!
I have 2 different Splunk apps: one is a TA and the other is an app.

TA: uses a modular input to connect with a data source. There are some logs and metadata that are pulled from the data source. Logs are pulled via syslog by providing a TCP input, and metadata via an API key and secret. The metadata is stored in KV stores.

App: is supposed to be installed on search heads; it supports dashboards/reports that make use of the logs and metadata sent by the HF.

For Splunk Enterprise, the above approach works when the HF has the context of the search heads, because the HF takes care of uploading the KV stores to the search heads via a scheduled search. This ensures that the app residing on the SH has the data to work with. However, on Splunk Cloud, once the TA is installed, how do we ensure that the SH nodes have metadata to work with? Can we find out the search head FQDNs so that KV stores can be copied there via a scheduled search?
Can you post some dataset as well as a test time that you think should yield results but did not? (To eliminate the complexity of the test, you can compare with a fixed epoch time instead of now().) I ran the following, and your where command gives 2 to 3 outputs depending on when in the calendar minute the emulation runs.

| makeresults count=10
| streamstats count as offset
| eval _time = relative_time(_time, "-" . offset . "min"), eventStartsFrom = relative_time(_time, "+" . (10 - offset) . "min"), eventEndsAt = relative_time(eventStartsFrom, "+5min")
| eval _time = now()
``` data emulation above ```
| fieldformat eventStartsFrom = strftime(eventStartsFrom, "%F %T")
| fieldformat eventEndsAt = strftime(eventEndsAt, "%F %T")
| where eventStartsFrom <= now() and eventEndsAt >= now()

One sample output is:

_time | eventEndsAt | eventStartsFrom | offset
2024-06-14 13:49:36 | 2024-06-14 13:54:36 | 2024-06-14 13:49:36 | 5
2024-06-14 13:49:36 | 2024-06-14 13:52:36 | 2024-06-14 13:47:36 | 6
2024-06-14 13:49:36 | 2024-06-14 13:50:36 | 2024-06-14 13:45:36 | 7

Another output is:

_time | eventEndsAt | eventStartsFrom | offset
2024-06-14 13:53:11 | 2024-06-14 13:56:12 | 2024-06-14 13:51:12 | 6
2024-06-14 13:53:11 | 2024-06-14 13:54:12 | 2024-06-14 13:49:12 | 7

The final output uses the _time field to display now().
I have this question as a reference: Splunk Question. I have one indexer, one SH, and one forwarder. At some point, I had sent data from the forwarder to the indexer and it was searchable from the SH. After a few runs, I received this error:

Search not executed: The minimum free disk space (5000MB) reached for /export/opt/splunk/var/run/splunk/dispatch. user=admin., concurrency_category="historical", concurrency_context="user_instance-wide", current_concurrency=0, concurrency_limit=5000

This was because our root filesystem was only 20 GB, and after a few searches it had dropped below 5 GB. At that point I wanted to remount, move a few things around, and move Splunk into a larger filesystem. So I did, and since then I have fully removed and reinstalled Splunk and remounted it multiple times on each server, yet some of these issues persist. Currently, the search head isn't connecting to and searching the indexer. Storage and server resources are fine, and I can connect through telnet over port 8089, but the replication keeps failing. I keep receiving this error:

Bundle replication to peer named dc1nix2dxxx at uri https://dc1nix2dxxx:8089 was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_DATA_TRANSMIT_FAILURE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information. Expected common latest bundle version on all peers after sync replication, found none. Reverting to old behavior - using most recent bundles on all

I can connect and do everything else on the server through the backend, and I can telnet between the two, so I'm not sure what to do. Most everything I keep finding has me check settings under the distributed environment, and most of the time, under those settings, it says that due to our license we aren't allowed to access them. All I have set is one search peer, dc1nix2pxxx:8089.

Some sources say it's an issue with the web.conf settings, but I don't have a web.conf under my local directory, and if I did, what should it look like? I just have three servers I'm working with. I'd appreciate any help or guidance in the right direction. Thank you.
Just tried making AND upper case, but it didn't work.
I have been working on our Splunk dev environment, and since then I have reinstalled and uninstalled Splunk many times. My question is why, even on a fresh install, the apps and a few other artifacts remain. Once I wipe all traces of Splunk off a server, I would think that a reinstall would be a fresh start; yet some of the GUI settings remain, and even some apps on the specific servers remain.

I have one dev indexer, SH, and forwarder. We have specific apps that I installed for people months ago, and since then I have rm -rf'd all traces of Splunk that I could find, and yet, upon reinstall of Splunk, I still see those apps under $SPLUNK_HOME/etc/apps. I am unzipping the same tar on each server, yet things like that persist across the servers.

My question is: what is storing that info? For example, the app BeyondTrust-PMCloud-Integration/, located under /export/opt/splunk/etc/apps, persists through two or three reinstalls of Splunk. Is the filesystem storing data about the Splunk install even after I rm -rf all of /export/opt/splunk?

I'm trying to fix some annoying issues with replication and such by just resetting the servers, since I am building them from the ground up, but these servers are still retaining some stuff. I decided to redo Splunk dev after we kept having issues with the old dev environment. I wanted a completely fresh start, but it seems as if Splunk retains some things even after a full reset. So I'm not sure if some problems are still persisting because something from a previous install is still floating around somewhere. Thanks for any help.
Both are set in the events as a field
This requirement was solved with the following syntax (note that eval's case() comparisons use ==):

index = indxtst
| table _time source EVENT_TYPE EVENT_SUBTYPE UID EVENT
| eval diff=now()-_time
| eval type=case(EVENT=="START","START",EVENT=="END","END")
| eventstats dc(type) as dc_type by UID
| search dc_type=1 AND (type=START AND diff>300)
Per the Container automation API docs, "the update API is supported from within a custom function". However, for the following code, "Validate" fails with "Undefined variable 'container'":

update_data = {}
update_data['name'] = 'new container name'
phantom.update(container, update_data)

What is the fix?
Does anyone know how to reach Splunk Sales in the US? Is there a new process to reach them, now that they're part of Cisco? I've been trying to reach them for over a month. I've submitted contact forms on the website. No response, not even an automated one. Calls to 1-866-GET-SPLUNK go to voicemail. Thanks.
I have a logfile like this -   2024-06-14 09:34:45,504 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc6285603725604350160.tmp&name=Dittmar%20-%20NO%20Contents%20-%20%20company%20Application%20(Please%20Sign)%20-%20signed&contentCreator=ALEXANDER BLANCO&mimeType=application/pdf&accountNum=09631604&policyNum=12980920&jobIdentifier=34070053 2024-06-14 09:34:45,505 INFO [com.mysite.core.repo.upload.FileUploadWebScript] [http-nio-8080-exec-43] Uploading file to pc from \\myloc\CoreTmp\app\pc\in\gwpc628560372560435 2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp&name=wagnac%20%20slide%20coverage%20b&description=20% rule&contentCreator=JOSEY FALCON&mimeType=application/pdf&accountNum=09693720&policyNum=13068616 2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.upload.FileUploadWebScript] [http-nio-8080-exec-43] The Upload Service /repo/service/company/upload failed in 0.000000 seconds, null 2024-06-13 09:22:49,103 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-43] Exception from executeScript: 051333149 Failed to execute web script. org.springframework.extensions.webscripts.WebScriptException: 051333149 Failed to execute web script. 
at com.mysite.core.repo.BaseWebScript.execute(BaseWebScript.java:105) at org.repo.repo.web.scripts.RepositoryContainer.lambda$transactionedExecute$2(RepositoryContainer.java:556) at org.repo.repo.transaction.RetryingTransactionHelper.doInTransaction(RetryingTransactionHelper.java:450) at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecute(RepositoryContainer.java:539) at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:663) at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:699) ... 23 more Caused by: java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - Error at index 0 in: " r" at java.base/java.net.URLDecoder.decode(URLDecoder.java:232) at java.base/java.net.URLDecoder.decode(URLDecoder.java:142) at com.mysite.core.repo.util.RepositoryUtils.decodeValue(RepositoryUtils.java:465) at com.mysite.core.repo.BaseWebScript.getParameterMap(BaseWebScript.java:138) at com.mysite.core.repo.upload.FileUploadWebScript.executeImpl(FileUploadWebScript.java:37) at com.mysite.core.repo.BaseWebScript.execute(BaseWebScript.java:75) ... 47 more 2024-06-13 09:22:49,124 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-53] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/search Query String: center=cc&docId=a854dbad-af6e-43e3-af73-8ac66365e000   Now there are multiple log entries so we need to first check for the presence of this error "Illegal hex characters in escape (%) pattern". Then looking at the SessionID... 
in this case, [http-nio-8080-exec-43], but there can be many others, and possibly duplicate SessionIDs, in the log. Check the line starting with "Query String" with the same or close timestamp (HH:MM) and create a report like this:

AccountNumber | PolicyNumber | Name | Location
09693720 | 13068616 | wagnac%20%20slide%20coverage%20b | \\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp

As you can see, there are two entries in the logfile for the same SessionID http-nio-8080-exec-43, but we want a record only for the entry where we got (1) the error "Illegal hex characters in escape" and (2) the entry originated at 2024-06-13 09:22. We can compare _time too, as the request event and the error event can differ in time, so it is better to search and compare with the formatted timestamp strftime(_time, "%Y-%m-%d %H:%M"); this way it compares the date, hour, and minute. By the way, we might have the same error with the same SessionID in the log, but it would have a different timestamp, so it is very important to check the time as well, using the formatted value. I created one Splunk report.
The inner and outer queries provide results separately, but when I merge and run them, although the search is looking at the required events, it returns no data in the table:

index=myindex "Illegal hex characters in escape (%) pattern"
| rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
| eval outer_timestamp=strftime(_time, "%Y-%m-%d %H:%M")
| table outer_timestamp, sessionID
| join type=inner sessionID
    [ search index=index "Query String" AND "myloc" AND "center=pc"
    | rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
    | rex "accountNum=(?<AccountNum>\d+)"
    | rex "policyNum=(?<PolicyNum>\d+)"
    | rex "name=(?<Name>[^&]+)"
    | rex "description=(?<Description>[^&]+)"
    | rex "location=(?<Location>[^&]+)"
    | eval inner_timestamp=strftime(_time, "%Y-%m-%d %H:%M")
    | table sessionID, AccountNum, PolicyNum, Name, Description, Location, inner_timestamp ]
| where outer_timestamp = inner_timestamp
| table outer_timestamp, sessionID, AccountNum, PolicyNum, Name, Description, Location

What can be the issue? How can I get the desired result? Thanks!
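A hedged alternative sketch (index and field names assumed from the post; also note the inner search above uses index=index while the outer uses index=myindex): join keeps only one matching subsearch row per sessionID, so when a sessionID spans several minutes the timestamp filter can discard everything. Grouping both event types with stats by sessionID and minute avoids the join entirely:

```spl
index=myindex ("Illegal hex characters in escape (%) pattern" OR ("Query String" "myloc" "center=pc"))
| rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
| eval minute=strftime(_time, "%Y-%m-%d %H:%M")
| eval is_error=if(searchmatch("Illegal hex characters in escape"), 1, 0)
| rex "accountNum=(?<AccountNum>\d+)"
| rex "policyNum=(?<PolicyNum>\d+)"
| rex "name=(?<Name>[^&]+)"
| rex "location=(?<Location>[^&]+)"
| stats max(is_error) as has_error, values(AccountNum) as AccountNum, values(PolicyNum) as PolicyNum, values(Name) as Name, values(Location) as Location by sessionID, minute
| where has_error=1
```

The stats groups the error event and the Query String event into one row whenever they share a SessionID and a calendar minute, then keeps only groups that contain the error.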
Hi @anil1219, this seems to be JSON format, so you could use INDEXED_EXTRACTIONS = JSON in the sourcetype definition in props.conf (https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Propsconf) or the spath command (https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Spath). Otherwise, you could use a regex like the following:

rex "\"PaymentType\":\"(?<PaymentType>[^\"]+)"

which you can test at https://regex101.com/r/VEeiyG/1. Ciao. Giuseppe
I have 2 records for PaymentType: send and receive. I would like to extract PaymentType as receive only so that I can compare further. Could you please let me know how I can extract PaymentType as receive only?

transaction: {"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"receive","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}

transaction: {"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"send","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}
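One hedged sketch (your_index is a placeholder; the "transaction: " prefix is assumed from the sample): extract the JSON that follows the prefix, let spath parse it, and keep only the receive records.

```spl
index=your_index "PaymentType"
| rex field=_raw "transaction: (?<txn>\{.+\})"
| spath input=txn
| search PaymentType=receive
| table identifier, PaymentType, timestamp
```

spath's input argument tells it to parse the extracted txn field rather than _raw, so the surrounding log text does not interfere with the JSON extraction.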
Alternatively, you could use streamstats to build a list of files to match against (note null() rather than a bare NULL token in eval):

index=wealth OR index=transform-file OR index=ace_message earliest=-30m
| rex field=_raw "inputFileName: (?<inputFileName>.*?),"
| rex field=inputFileName "file\_(?<ID>\d+_\d+)\_"
| streamstats values(eval(if(now()-_time<1800,ID,null()))) as IDS
| eval alertable=if((now()-_time>1800) AND (ID IN (IDS)),"True","False")
| table _time, ID, IDS, alertable
Besides the approximate time (since the times don't match), is there nothing else to relate those two particular logs together? Will your search be used in the general case to output more than one row's worth of data? If so, how far apart are the various distinct transactions (or can that be arbitrarily short)?
Hello Team, I need assistance with joining 2 SPL queries to get the desired output. Refer to the log snippet below. As per the log pattern, there are distinct transaction IDs with the ORA-00001 error message. The requirement is to identify all such transactions with the error message. Please suggest.

240614 04:35:50 Algorithm: Al10: <=== Recv'd TRN: 48493009394940303 (TQ_HOST -> TQ_HOST)
240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Error Executing CompareRBSrules Procedure.
240614 04:35:52 Algorithm: TS8398 hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:

I have the below 2 queries with their respective outputs.

Query 1:
index=test_index source=/test/instance ("<=== Recv'd TRN:")
| rex field=_raw "\<=== Recv'd TRN:\s+(?<transaction_id>\w+)"
| table _time transaction_id
Output: _time | transaction_id

Query 2:
index=test_index source=/test/instance ("ORA-00001")
| table _time _raw
Output: _time | _raw

I want to merge or join both results and get the final output as below:

_time | transaction_id | _raw

In this case (example):

240614 04:35:50 | 48493009394940303 | ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated

Please suggest what modifications should be made to the above query to get this desired result. @ITWhisperer, kindly help.
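One non-join sketch (reusing the index/source from the queries above, and assuming each ORA-00001 line follows its TRN line in time order): bring both event types back in a single search, sort ascending, carry the most recent transaction_id forward with filldown, then keep only the error lines.

```spl
index=test_index source=/test/instance ("<=== Recv'd TRN:" OR "ORA-00001")
| sort 0 _time
| rex field=_raw "Recv'd TRN:\s+(?<transaction_id>\w+)"
| filldown transaction_id
| search "ORA-00001"
| table _time, transaction_id, _raw
```

filldown copies the last non-null transaction_id onto the subsequent error events, which pairs each ORA-00001 event with the TRN received just before it.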
Boy, am I glad to have found this thread. Got my problem solved, thank you so much!