As @gcusello points out, the data you illustrated is suspiciously close to JSON. Are you sure that your data is not like this instead?

{"transaction": {"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"receive","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"} }

Or is it possible that you are simply illustrating an extracted field named transaction whose values are

{"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"receive","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}

and

{"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"send","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}

If not, your developers are really doing a disservice to everyone downstream, not just Splunkers. But if the raw data is indeed as you originally posted, you can first extract the valid JSON into a field, let's call it transaction, then extract key-value pairs from this object.
| rex "transaction: *(?<transaction>{.+)" | fromjson transaction   This is what you should get PaymentType amount currencyCode fees identifier recipientHandle senderHandle status statusChangedTimestamp timestamp transactionAmount transferIdentifier transferMode type version receive [REDACTED] USD   0c4240e0-2c2c-6427-fb1f-71131029cd89 [REDACTED] [REDACTED] approved 2024-06-13T04:29:56.337+0000 2024-06-13T04:29:20.673+0000 [REDACTED] cded3395-38f9-4258-90a5-9269abfa5536 contact payment 1 send [REDACTED] USD   0c4240e0-2c2c-6427-fb1f-71131029cd89 [REDACTED] [REDACTED] approved 2024-06-13T04:29:56.337+0000 2024-06-13T04:29:20.673+0000 [REDACTED] cded3395-38f9-4258-90a5-9269abfa5536 contact payment 1 Here is an emulation you can play with and compare with real data   | makeresults | eval data = mvappend("transaction: {\"version\":1,\"status\":\"approved\",\"identifier\":\"0c4240e0-2c2c-6427-fb1f-71131029cd89\",\"amount\":\"[REDACTED]\",\"transactionAmount\":\"[REDACTED]\",\"timestamp\":\"2024-06-13T04:29:20.673+0000\",\"statusChangedTimestamp\":\"2024-06-13T04:29:56.337+0000\",\"type\":\"payment\",\"transferIdentifier\":\"cded3395-38f9-4258-90a5-9269abfa5536\",\"currencyCode\":\"USD\",\"PaymentType\":\"receive\",\"senderHandle\":\"[REDACTED]\",\"recipientHandle\":\"[REDACTED]\",\"fees\":[],\"transferMode\":\"contact\"}", "transaction: {\"version\":1,\"status\":\"approved\",\"identifier\":\"0c4240e0-2c2c-6427-fb1f-71131029cd89\",\"amount\":\"[REDACTED]\",\"transactionAmount\":\"[REDACTED]\",\"timestamp\":\"2024-06-13T04:29:20.673+0000\",\"statusChangedTimestamp\":\"2024-06-13T04:29:56.337+0000\",\"type\":\"payment\",\"transferIdentifier\":\"cded3395-38f9-4258-90a5-9269abfa5536\",\"currencyCode\":\"USD\",\"PaymentType\":\"send\",\"senderHandle\":\"[REDACTED]\",\"recipientHandle\":\"[REDACTED]\",\"fees\":[],\"transferMode\":\"contact\"}") | mvexpand data | rename data AS _raw ``` data emulation above ```  
Hi @Abass42 , the answer is easy: add more disk space to both your servers. On Indexers you must have the disk space for data, but also the disk space for bundle replication. On SHs you must have the disk space for apps and for dispatches. How much disk space did you allocate on both servers? If this is a lab, you could save space by applying a limited retention to your data. Ciao. Giuseppe
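If you take the retention route, retention is set per index in indexes.conf; this is a minimal sketch only, where the index name, the 7-day window, and the size cap are all assumptions to tune for your lab:

[main]
# assumption: roll events to frozen (deleted by default) after 7 days
frozenTimePeriodInSecs = 604800
# assumption: also cap the total index size at roughly 10 GB
maxTotalDataSizeMB = 10240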
Hi @rdhdr , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Ah. I suspect this is more about the rex expression than the table. You could try a join:

index=myindex TTY
| rex field=_raw "Job Id: (?<jobId>.*?)\."
| join left=L right=R where L.jobId=R.jobId
    [search index=myindex cs2k_transaction_id_in_error="CHG063339403031900" major_code="ERROR"
    | rex field=_raw "Job Id: (?<jobId>.*?)\."
    | table jobId ]
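The left=/right= alias form of join is comparatively recent; if your Splunk version rejects it, the classic field-name form should behave the same here. A sketch, assuming jobId is extracted identically on both sides:

index=myindex TTY
| rex field=_raw "Job Id: (?<jobId>.*?)\."
| join type=inner jobId
    [search index=myindex cs2k_transaction_id_in_error="CHG063339403031900" major_code="ERROR"
    | rex field=_raw "Job Id: (?<jobId>.*?)\."
    | table jobId ]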
Thank you for your comment. I posted sample data in the original post and I will try your suggestion.
So, there are two ways to do this CAC authentication: SAML or LDAP trusted methods. Previously, I thought PKI was the only option, but SAML opens up another one. I hope this helps: Configure single sign-on with SAML - Splunk Documentation
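For anyone who lands here, the SAML side is configured in authentication.conf; the stanza below is only a sketch with placeholder values (the entity ID, IdP URL, and certificate path all come from your identity provider, not from Splunk):

[authentication]
authType = SAML
authSettings = saml_settings

[saml_settings]
# placeholders: your IdP supplies the real values
entityId = splunk-search-head
idpSSOUrl = https://idp.example.com/sso/saml
idpCertPath = /opt/splunk/etc/auth/idpCerts/idpCert.pem
signAuthnRequest = true
signedAssertion = true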
Thanks for the reply, yes, I have tried that already. It does not work. The response (jobId) is in a table so that wont allow this subsearch.
Would it be possible to post some sample data? It's a bit too easy to get lost in what is supposed to be an escape character versus a character in your data. Please replace any real phone numbers with dummy values.

Escaping backslashes for regex expressions is always fun, but I suspect that's where your issues are coming from. Escaping a backslash in a regex from the search box requires four backslashes, as there are two layers of escaping happening. I try to construct regexes to avoid that:

| makeresults
| eval phone_data="[{\"PhoneNumber\":\"123-456-7890\"}]"
| append
    [| makeresults
    | eval phone_data="[{\\\"PhoneNumber\\\":\\\"111-111-1111\\\"}]" ]
| rex field=phone_data "PhoneNumber[^\d]+(?<my_PhoneNumber>[0-9-\(\)]+)"

but if I'm making an incorrect assumption about the characters in a phone number, you can try

| rex field=phone_data "PhoneNumber[^\d]+(?<my_PhoneNumber>[^\\\\\"]+)"
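Another way to sidestep the escaping question entirely is to turn the field back into valid JSON and let spath parse it; a sketch, assuming the only corruption is the extra layer of backslash-escaped quotes:

| eval phone_json=replace(phone_data, "\\\\\"", "\"")
| spath input=phone_json
``` fields from a top-level JSON array come out with names like {}.PhoneNumber ```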
Have you tried a subsearch?

index=myindex "TTY"
    [ search index=myindex cs2k_transaction_id_in_error="CHG063339403031900" major_code="ERROR"
    | rex field=_raw "Job Id: (?<jobId>.*?)\."
    | table jobId ]
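For context on what that does: the subsearch runs first and is rewritten into the outer search as a field filter. You can preview the generated terms by running the inner search with | format appended:

index=myindex cs2k_transaction_id_in_error="CHG063339403031900" major_code="ERROR"
| rex field=_raw "Job Id: (?<jobId>.*?)\."
| table jobId
| format

This should produce a fragment of the form ( ( jobId="<value>" ) ), so the outer search matches events whose extracted jobId field equals that value, not events that merely contain the text.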
My first Splunk query gives me a value in a table. The value is a jobId. I want to use this jobId in a second search query. Can we join them the Splunk way?

index=myindex cs2k_transaction_id_in_error="CHG063339403031900" major_code="ERROR"
| rex field=_raw "Job Id: (?<jobId>.*?)\."
| table jobId

index=myindex "TTY" "jobId"
I don't know this area well, but the error suggests an issue with "container", and not "update". Within your custom function you are using container, but it would seem it's not defined. How are you passing "container" into your function? 
Hello, I have a case where I need to do regex, and I built my regex using regex101; everything works great and catches everything there. But I encountered an issue where Splunk won't accept optional groups "(\\\")?". It gives the error of unmatched closing parenthesis until you add another closing bracket, like so: "(\\\"))?". And another issue I encountered is that after I add this closing bracket, the regex will work, but not consistently. Here's what I mean. This is a part of my regex:

\[\{(\\)?\"PhoneNumber(\\)?\":(\\)?\"(?<my_PhoneNumber>[^\\\"]+

It won't work until I add more brackets to the optional groups, like I mentioned before:

\[\{(\\))?\"PhoneNumber(\\))?\":(\\))?\"(?<my_PhoneNumber>[^\\\"]+

Second issue: adding another part will still work:

\[\{(\\))?\"PhoneNumber(\\))?\":(\\))?\"(?<my_PhoneNumber>[^\\\"]+)\S+OtherPhoneNumber(\\))?\":(\\))?(\"))?(?<myother_PhoneNumber>[^,\\\"]+|null)

Adding a third part with the exact same format as the second part won't; it gives the error of unmatched closing parenthesis again:

\[\{(\\))?\"PhoneNumber(\\))?\":(\\))?\"(?<my_PhoneNumber>[^\\\"]+)\S+OtherPhoneNumber(\\))?\":(\\))?(\"))?(?<myother_PhoneNumber>[^,\\\"]+|null)\S+Email(\\))?\":(\\))?(\"))?(?<email>[^,\\\"]+|null)

Am I missing something? I know the regex itself works.

Sample data of the original log:

[{"PhoneNumber":"+1 450555338","AlternativePhoneNumber":null,"Email":null,"VoiceOnlyPhoneNumber":null}]

[{\"PhoneNumber\":\"+20 425554005\",\"AlternativePhoneNumber\":\"+1 455255697\",\"Email\":\"Dam@test.com.us\",\"VoiceOnlyPhoneNumber\":null}]"}

[{\"PhoneNumber\":\"+1 459551561\",\"AlternativePhoneNumber\":\"+1 6155555533\",\"Email\":null,\"VoiceOnlyPhoneNumber\":\"+1 455556868\"}]
I created a Python script that successfully links episodes with my 3rd party ticketing system. I'm trying to populate that ticket system with some of the "common field" values associated with a given episode, but I don't see a good way to do that. Does anyone have any hints on how to accomplish this? I'm probably missing something very obvious in the documentation. thx!
I have 2 different Splunk apps; one is a TA and the other is an app.

TA: uses a modular input to connect with a data source. There are some logs and metadata that are pulled from the data source. Logs are pulled via syslog by providing a TCP input, and metadata via API key and secret. The metadata is stored in KV stores.

App: is supposed to be installed on search heads, and supports dashboards/reports that make use of the logs and metadata sent by the HF.

For Splunk Enterprise, the above approach works when the HF has the context of the search heads, because the HF takes care of uploading the KV stores to the search heads via a scheduled search. This ensures that the app residing on the SH has the data to work with. However, on Splunk Cloud, once the TA is installed, how do we ensure that the SH nodes have metadata to work with? Can we find out the search head FQDNs so that KV stores can be copied there via a scheduled search?
Can you post some dataset as well as a test time that you think should yield results but did not? (To eliminate the complexity of the test, you can compare with a fixed epoch time instead of now().) I ran the following and your where command gives 2 to 3 outputs depending on when in the calendar minute the emulation runs.

| makeresults count=10
| streamstats count as offset
| eval _time = relative_time(_time, "-" . offset . "min"),
    eventStartsFrom = relative_time(_time, "+" . (10 - offset) . "min"),
    eventEndsAt = relative_time(eventStartsFrom, "+5min")
| eval _time = now()
``` data emulation above ```
| fieldformat eventStartsFrom = strftime(eventStartsFrom, "%F %T")
| fieldformat eventEndsAt = strftime(eventEndsAt, "%F %T")
| where eventStartsFrom <= now() and eventEndsAt >= now()

One sample output is:

_time               | eventEndsAt         | eventStartsFrom     | offset
2024-06-14 13:49:36 | 2024-06-14 13:54:36 | 2024-06-14 13:49:36 | 5
2024-06-14 13:49:36 | 2024-06-14 13:52:36 | 2024-06-14 13:47:36 | 6
2024-06-14 13:49:36 | 2024-06-14 13:50:36 | 2024-06-14 13:45:36 | 7

Another output is:

_time               | eventEndsAt         | eventStartsFrom     | offset
2024-06-14 13:53:11 | 2024-06-14 13:56:12 | 2024-06-14 13:51:12 | 6
2024-06-14 13:53:11 | 2024-06-14 13:54:12 | 2024-06-14 13:49:12 | 7

The final output uses the _time field to display now().
I have this question as a reference: Splunk Question

I have one indexer, SH, and one forwarder. At some point, I had sent data from the forwarder to the indexer and it was searchable from the SH. After a few runs, I received this error:

Search not executed: The minimum free disk space (5000MB) reached for /export/opt/splunk/var/run/splunk/dispatch. user=admin., concurrency_category="historical", concurrency_context="user_instance-wide", current_concurrency=0, concurrency_limit=5000

This was because our root filesystem was only 20 gigs, and after a few searches it had dropped below 5 gigs. So at this point I wanted to remount and move a few things around and move Splunk into a larger FS. So I did, and since then I have fully removed and reinstalled Splunk and remounted it multiple times on each server, and some of these issues persist. Currently, the search head isn't connecting to and searching the indexer. Storage and server resources are fine, and I can connect through telnet over port 8089, but the replication keeps failing. I keep receiving this error:

Bundle replication to peer named dc1nix2dxxx at uri https://dc1nix2dxxx:8089 was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_DATA_TRANSMIT_FAILURE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information. Expected common latest bundle version on all peers after sync replication, found none. Reverting to old behavior - using most recent bundles on all

I can connect and do everything else on the server through the backend, and I can telnet between the two, so I'm not sure what to do. Most everything I keep seeing has me check settings under distributed environment, and most of the time, under those settings, it says that due to our license we aren't allowed to access those settings. All I have set is one search peer, dc1nix2pxxx:8089. Some sources say it's an issue with the web.conf settings, but I don't have a web.conf under my local, and if I did, what should that look like? I just have three servers I'm working with. I'd appreciate any help or guidance in the right direction.

Thank you
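One place to start on the search head is confirming which bundle-replication settings are actually in effect and how big the bundle being pushed is; a sketch, assuming a default $SPLUNK_HOME layout:

# show the effective distributed-search settings and which file each comes from
$SPLUNK_HOME/bin/splunk btool distsearch list --debug

# the knowledge bundles the search head tries to replicate live here; check their size
ls -lh $SPLUNK_HOME/var/run/*.bundle

If the bundle turns out to be large (big lookup files are the usual culprit), distsearch.conf lets you raise the cap or exclude files from replication; the values below are illustrative assumptions, not recommendations:

[replicationSettings]
maxBundleSize = 4096

[replicationBlacklist]
# assumption: skip replicating large lookup CSVs the indexers don't need
big_lookups = apps/*/lookups/*.csv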
Just tried making AND upper case, but it didn't work.
I have been working on our Splunk dev environment, and since then, I have reinstalled and uninstalled Splunk many times. I had a question as to why, even on a fresh install, the apps and a few other artifacts remain. Once I wipe all traces of Splunk off a server, I would think that upon reinstall it would be a fresh start. Yet some of the GUI settings remain, and even some apps on the specific servers remain.

I have one dev indexer, SH, and forwarder. We have specific apps that I installed for people months ago, and since then I have rm -rf'd all traces that I could find of Splunk, and yet, upon reinstall of Splunk, I still see those apps under $SPLUNK_HOME/etc/apps. I have the same tar that I am unzipping on each server, yet things like that persist across the servers.

My question is, what is storing that info? For example, the app BeyondTrust-PMCloud-Integration/, located under /export/opt/splunk/etc/apps, persists through two or three reinstalls of Splunk. Is the FS storing data about the Splunk install even after I rm -rf all of /export/opt/splunk?

I'm trying to fix some annoying issues with replication and such by just resetting the servers, since I am building them from the ground up, but these servers are still retaining some stuff. I decided to redo Splunk dev after we kept having issues with the old dev environment. I wanted a completely fresh start, but it seems as if Splunk retains some things even after a full reset. So I'm not sure if some problems are still persisting because something from a previous install is still floating around somewhere. Thanks for any help
Both are set in the events as fields.
This requirement was solved with the following syntax:

index=indxtst
| table _time source EVENT_TYPE EVENT_SUBTYPE UID EVENT
| eval diff=now()-_time
| eval type=case(EVENT=="START","START",EVENT=="END","END")
| eventstats dc(type) as dc_type by UID
| search dc_type=1 AND (type=START AND diff>300)
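A slightly different way to express the same idea, in case it helps anyone landing here: collect the START/END markers per UID with stats and keep the UIDs that started more than 5 minutes ago but never ended. This is only a sketch against the same assumed field names, not a drop-in replacement:

index=indxtst (EVENT="START" OR EVENT="END")
| stats earliest(_time) as startTime values(EVENT) as events by UID
``` keep UIDs with no END marker whose start is older than 300 seconds ```
| where isnull(mvfind(events, "END")) AND now()-startTime>300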