All Posts

Hi, I am trying to install the PureStorage Unified Add-on for Splunk, but while adding configurations I get the error below on the configuration page. I am installing it on my on-prem deployment server rather than Splunk Cloud. Can anyone advise what the reason could be and how to resolve it?

Error: Failed to load current state for selected entity in form!
Details Error: Request failed with status code 500

Add-on: https://splunkbase.splunk.com/app/5513

Thanks
It doesn't seem to matter. The macro expansion can be as simple as a single word replacing another, and the problem still happens.
This is exactly what I speculated in your previous question: your developers have left you compliant JSON, with some structure inside the DATA field. Instead of rex'ing individual elements as if DATA were random text, you should utilize the structure your developers intended. Have you tried my suggestion from yesterday?

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=DATA mode=sed "s/ *[\|}\]]/\"/g s/: *\[*/=\"/g"
| rename DATA AS _raw
| kv
| search ACTION=start OR ACTION=done NOT SERVICE="null"
| eval split=SERVICE.":".ACTION
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *START *DONE

(Since you are running timechart, there is no need to preserve _raw, so I omitted that. I also don't see how your last table command could give you the result you illustrated, because START and DONE are capitalized.)

Your sample data (only one event) gives

_time       AAP:START
01/02/2022  1
11/04/2024  0

This is the data emulation, including the _time conversion:

| makeresults
| eval _raw = "{\"date\": \"1/2/2022 00:12:22,124\", \"DATA\": \"[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success\", \"tags\": {\"host\": \"GTU5656\", \"insuranceid\": \"8786578896667\", \"lib\": \"app\"}}"
| spath
| eval _time = strptime(date, "%d/%m/%Y %H:%M:%S,%f")
``` the above emulates
index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success") ```

Play with it and compare to real data. If this doesn't work for select events, you need to post samples of those events.
Hi! Thanks for checking. So... I did more digging on my side. On a non-clustered search head, I've got no delay. On my clustered search heads, I do. I have two SH clusters and both are impacted. The Splunk version is 9.1.1.
Anyway, obsessing about EPS suggests that you might be thinking about replacing some other SIEM/log management solution. Those used to be licensed on a per-EPS basis. With Splunk it doesn't matter: if your license is ingest-based, it allows you to index a specified volume of data _daily_, regardless of whether it's a constant, steady data stream or just a few high-volume "batches" of data.
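If it helps to see the relationship between the two units, here is a back-of-envelope conversion; both numbers below are made-up assumptions for illustration, not guidance:

# Rough average EPS implied by a daily ingest volume.
daily_gb = 100          # assumed daily ingest
avg_event_bytes = 500   # assumed average event size; adjust for your data
events_per_day = daily_gb * 1024**3 / avg_event_bytes
print(f"~{events_per_day / 86400:,.0f} events/sec on average")

Note this is only the average; real traffic peaks far above it, which is exactly why a single EPS number is misleading.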
Ok, to be honest I had to check the config more to properly clone and forward my data; the behaviour of the conf is strange. But thanks a lot for your help, I appreciate it!
Hello everyone, I have around 3600 events to review but they are all encoded in hex. I know I can decode them by hand one by one, but this will take a lot of time which I do not have. I spent a few hours reading about similar problems here but none helped me. I found an app called decode2, but it was not able to help me either: it wants me to feed it a table to decode, and I only have 2 fields, one called time and one called event, nothing else; pointing it to event returns nothing. Below I'm posting 2 of the events as samples:

\hex string starts here\x00\x00\x00n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x005\xE6\x00ppt/tags/tag6.\x00\x00\x00\x00]\x00]\x00\xA9\x00\x00N\xE7\x00\x00\x00

\hex start\x00\x00\x00n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xE5\x00ppt/tags/tag3.-\x00\x00\x00\x00\x00\x00!\x00\xA1

I changed the first part of the string because it did not let me post; I also deleted the part between tag6. and the next slash, same goes for tag3.-

Is there a way to automatically convert all events from hex to text?
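Not a full answer, but if the events really contain literal \xNN escape text as shown above, one option is to export them and bulk-decode with a small script outside Splunk. A minimal sketch, assuming that escape format; the sample string and the "dot for non-printable bytes" choice are mine:

import re

def decode_hex_escapes(raw: str) -> str:
    # Turn literal \xNN escape sequences back into characters;
    # non-printable bytes become "." so the output stays reviewable.
    def repl(m):
        ch = chr(int(m.group(1), 16))
        return ch if ch.isprintable() else "."
    return re.sub(r"\\x([0-9A-Fa-f]{2})", repl, raw)

sample = r"\x00\x00\x005\xE6\x00ppt/tags/tag6.\x00\x00"
print(decode_hex_escapes(sample))  # -> ...5æ.ppt/tags/tag6...

The same function could also be wired into a custom search command if you need the decoding inside Splunk itself.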
Treating structured data as pure text is doomed to be unstable.  Have you tried my suggestion of reconstructing events based on inherent structure?
If you are willing to accept some acrobatics, ipmask can be used even with variable net masks.

| map search="| makeresults |fields - _* | eval Network = ipmask(\"$Mask$\", $IP$), IP = $IP$, Mask = $Mask$"

The emulated data below should give

IP            Mask            Network
192.168.1.10  255.255.255.0   192.168.1.0
10.54.3.8     255.255.246.0   10.54.2.0

Here is the emulation for you to play with and compare with real data:

| makeresults format=csv data="IP, Mask
192.168.1.10, 255.255.255.0
10.54.3.8, 255.255.246.0"

But again, to say 192.168.1.0 is a network address is (very) classful thinking. The CIDR expressions should be

IP            Mask            Network
192.168.1.10  255.255.255.0   192.168.1.0/24
10.54.3.8    255.255.248.0   10.54.0.0/21

Isn't that so? This can be obtained with a bit of bit math, like this:

| map search="| makeresults |fields - _* | eval Mask = split($Mask$, \".\"), Mask = 32 - sum(mvmap(Mask, log(256 - Mask,2))), Network = ipmask(\"$Mask$\", $IP$) . \"/\" . Mask, IP = $IP$, Mask = $Mask$"
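If you want to sanity-check that log-based mask arithmetic outside SPL, the same idea in a few lines (my sketch, assuming a contiguous mask):

import math

def prefix_len(mask: str) -> int:
    # Each octet of a contiguous mask contributes log2(256 - octet)
    # host bits: 255 -> 0, 254 -> 1, 248 -> 3, 0 -> 8.
    return 32 - sum(int(math.log2(256 - int(o))) for o in mask.split("."))

print(prefix_len("255.255.255.0"))  # 24
print(prefix_len("255.255.248.0"))  # 21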
I find it interesting that you give the log file size in GB rather than events, yet you expect UF documentation to provide EPS. @PickleRick has explained why we cannot offer an EPS number and also why any talk about data rates is a guess at best. A Splunk UF is very capable of handling 100s of GBs of log files; many customers do so regularly. What problem are you trying to solve?
OK. Have you read anything that has been written in this thread? EPS as such is not a very important concept for Splunk (at least not on the UF level).
Same speed here. What is your environment like?
Hi, yep I understand that but I don't understand what the error is telling me:

Error while checking the script: Can't parse /opt/splunk/etc/apps/TA-LoRaWAN_decoders/bin/br_uncompress.py: ParseError: bad input: type=1, value='print', context=(' ', (25, 8))

I think it's referring to line 25, which is:

def print(self, message, end="\n"):
We want to read log files (approx. 100s of GBs) and send them through the Splunk forwarder. Before setting that up, we need to verify the events-per-second (EPS) recorded in a flat file with the Universal Forwarder.
Depending on how you set up your authorization, you might end up with different permissions in the roles on each SH (e.g. access to indexes). Check your roles to see what the allowed indexes are on each SH for the roles you have.
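One quick way to compare is to pull the role definitions from each search head's management port and diff them. A rough sketch against the REST API; the host and credentials are placeholders:

import requests

# Placeholder host and credentials; run once per search head and diff the output.
resp = requests.get(
    "https://sh1.example.com:8089/services/authorization/roles",
    params={"output_mode": "json", "count": 0},
    auth=("admin", "changeme"),
    verify=False,  # lab only; verify certificates in production
)
resp.raise_for_status()
for entry in resp.json()["entry"]:
    print(entry["name"], entry["content"].get("srchIndexesAllowed"))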
Since September 2021, Splunk does not include Python 2, so you need to update your code if it's not compatible with Python 3. https://www.splunk.com/en_us/blog/platform/removing-python-2-from-new-splunk-cloud-and-splunk-enterprise-releases-starting-september-2021.html
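For the specific error quoted earlier, my guess is that the trigger is the method literally named print: under a Python 2 grammar, print is a keyword, so a checker parsing with that grammar cannot accept "def print(...)" at all. One possible fix is simply renaming the method; the class name here is a hypothetical stand-in, not the add-on's actual code:

class Decoder:  # hypothetical stand-in for the class in br_uncompress.py
    # Was: def print(self, message, end="\n").
    # "print" is a keyword in a Python 2 grammar, so "def print(...)"
    # fails to parse; any other name avoids the problem.
    def emit(self, message, end="\n"):
        print(message, end=end)  # in Python 3, print is an ordinary function

Remember to update every call site of the old method as well.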
@stevenbo I am curious why you need to do this tbh.  You may also find that your current setup will be unsupported after your changes. Always best to get some top cover from Splunk Support, especially if it's going to be a production system. 
Hi, no, I still don't know what the message means!
What is your business problem?