All Posts

Hi @Marcie.Sirbaugh, I see you have an open ticket with the same error you asked Sajo about (agentregistrationmodule.go:352). Perhaps you can continue to share any outcomes from that ticket interaction here with Sajo.
Hello @493600, There is no OOTB way of achieving this. Usually, we have to download the events in _raw format, upload them to a test environment that has the latest version of the TA along with the CIM Validator installed, and validate the field extractions. Commands like fieldsummary can help in comparing field names and values - https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Fieldsummary

Please accept the solution and hit Karma if this helps!
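For example, a minimal sketch of that comparison, with a hypothetical test index and sourcetype (swap in your own):

index=upgrade_test sourcetype=your:sourcetype
| fieldsummary
| table field count distinct_count values

Running this against the same sample data before and after the TA upgrade lets you diff the two outputs field by field.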
Seeing some errors in the internal logs for lookup files. Can someone help me with the reason for these errors?

1) Unable to find filename property for lookup=xyz.csv will attempt to use implicit filename.
2) No valid lookup table file found for this lookup=*
3) The lookup table '*' does not exist or is not available. - This can happen when the definition or reference of the lookup file is there but the file itself has been deleted.
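One hedged way to narrow these down is to compare the lookup definitions Splunk knows about against the files actually present (xyz.csv here is just the name taken from the first error message):

| rest /services/data/transforms/lookups splunk_server=local
| search title=xyz*
| table title filename eai:acl.app eai:acl.sharing

If the definition is listed, | inputlookup xyz.csv will then tell you whether the backing file itself is still readable.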
Looking for a solution that performs certain validation checks when we upgrade any Splunk add-on to the latest version. This is to make sure that when the add-on is upgraded it does not break any of the existing working configs, like field parsing, search execution time, etc., in prod. So we need to check if it's possible to create a dashboard or something where we can compare the old state vs the upgraded state of the add-on before we deploy to prod. Two basic validations could be CIM fields & search execution time, and to kick this off we can pick any one sourcetype.
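For the search execution time piece, one possible sketch (assuming your role can read the _audit index) is to baseline run times before the upgrade and re-run the same search afterwards:

index=_audit action=search info=completed
| stats avg(total_run_time) AS avg_runtime_s count BY savedsearch_name
| sort - avg_runtime_s

Comparing this output from before and after the upgrade gives a rough view of any search-time regressions.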
I left the frozen path pointing to $SPLUNK_DB on the indexer's drive, but I'm not trying to employ frozen buckets at all. I'm trying to use the volumes on external drives for hot and cold; that's how our current instance is set up. The difference is that the current one is on Windows, and this new one is going to be on RHEL8.
Thanks @richgalloway!!
@yuanliu, yes I tried the query below, but I'm getting 0 results:

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=DATA mode=sed "s/ *[\|}\]]/\"/g s/: *\[*/=\"/g"
| rename DATA AS _raw
| kv
| search ACTION=start OR ACTION=done NOT SERVICE="null"
| eval split=SERVICE.":".ACTION
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *START *DONE
Splunk is not going to be able to process that binary PowerPoint file without some pre-processing (manual or via a script).
These are all static. I was given a .pptx file and asked to find certain events, which wouldn't be a problem if the data were in plaintext. The instance is running on my machine, so no TLS is involved, correct me if I'm wrong. Is there anything I can do to actually use this data without having to decode the hex to binary and then to ASCII manually?
Happy to take a look if you can share your agent_config.yaml file.
You appear to have a hex dump of binary data.  Decoding the hex will give you the original binary, but Splunk doesn't support binary data. I've seen similar-looking input when an encrypted input stream is not decrypted before being indexed.  Double-check the TLS/SSL settings.
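If it helps, a hedged first place to look is splunkd's own logs for handshake complaints (the search terms are generic, not specific to any one input type):

index=_internal sourcetype=splunkd (SSL OR TLS) (ERROR OR WARN)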
I just checked on a Splunk Cloud SHC and saw no difference in expansion time, so I suspect there's something happening in your environment. Do you see any relevant messages in splunkd.log on the SH?
I am in fact using the Splunk OTel Collector and have validated the indentation.
Hi, I am trying to install the PureStorage Unified Add-on for Splunk, but while looking to add configurations I am getting the below error on the configuration page. I am installing it on my on-prem deployment server rather than Splunk Cloud. Can anyone advise what could be the reason and how to resolve it?

Error: Failed to load current state for selected entity in form!
Details: Error: Request failed with status code 500

Addon: https://splunkbase.splunk.com/app/5513

Thanks
It doesn't seem to matter. The macro expansion can be as simple as a single word that it's replacing and the problem still happens.
This is exactly what I speculated in your previous question: your developers have left compliant JSON while keeping some structure within the DATA field. Instead of rex'ing individual elements as if DATA were random text, you should utilize the structure your developers intended. Have you tried my suggestion from yesterday?

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=DATA mode=sed "s/ *[\|}\]]/\"/g s/: *\[*/=\"/g"
| rename DATA AS _raw
| kv
| search ACTION=start OR ACTION=done NOT SERVICE="null"
| eval split=SERVICE.":".ACTION
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *START *DONE

(Since you are running timechart, there is no need to preserve _raw, so I omitted that. I also don't see how your last table command could give you the result you illustrated, because START and DONE are capitalized.)

Your sample data (only one event) gives:

_time        AAP:START
01/02/2022   1
11/04/2024   0

This is the data emulation, including the _time conversion:

| makeresults
| eval _raw = "{\"date\": \"1/2/2022 00:12:22,124\", \"DATA\": \"[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success\", \"tags\": {\"host\": \"GTU5656\", \"insuranceid\": \"8786578896667\", \"lib\": \"app\"}}"
| spath
| eval _time = strptime(date, "%d/%m/%Y %H:%M:%S,%f")
``` the above emulates index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success") ```

Play with it and compare to real data. If this doesn't work for select events, you need to post samples of those events.
Hi! Thanks for checking. So... I did more digging on my side. On a non-clustered search head, I've got no delay. On my clustered search heads, I do. I have two SH clusters and both are impacted. Splunk version is 9.1.1.
Anyway, obsessing about EPS suggests that you might be thinking about replacing some other SIEM/log-management solution. Those used to be licensed on a per-EPS basis. With Splunk it doesn't matter: if your license is ingest-based, it allows for indexing a specified volume of data _daily_, regardless of whether it arrives as a constant steady stream or as just a few high-volume "batches".
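For example, a quick sketch of that daily view from the license usage logs (run it on the license manager; _internal retention applies):

index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_ingest_GB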
OK, to be honest I had to check the config more closely to properly clone and forward my data; the behaviour of the conf is strange. But thanks a lot for your help, I appreciate it!
Hello everyone, I have around 3600 events to review, but they are all encoded in hex. I know I can decode them by hand one by one, but this will take a lot of time, which I do not have. I spent a few hours reading about similar problems here, but none helped me. I found an app called decode2, but it was not able to help me either; it wants me to feed it a table to decode, and I only have 2 tables, one called time and one called event, nothing else. Pointing it to event returns nothing. Below I'm posting 2 of the events as a sample:

\hex string starts here\x00\x00\x00n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x005\xE6\x00ppt/tags/tag6.\x00\x00\x00\x00]\x00]\x00\xA9\x00\x00N\xE7\x00\x00\x00

\hex start\x00\x00\x00n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xE5\x00ppt/tags/tag3.-\x00\x00\x00\x00\x00\x00!\x00\xA1

I changed the first part of each string because it did not let me post, and I also deleted the part between tag6. and the next slash; same goes for tag3.-

Is there a way to automatically convert all events from hex to text?