All Posts


Ok, to be honest I had to dig into the config a bit more to properly clone and forward my data; the behaviour of the conf is strange. But thanks a lot for your help, I appreciate it!
Hello everyone, I have around 3600 events to review, but they are all encoded in hex. I know I can decode them by hand one by one, but that would take a lot of time which I do not have. I spent a few hours reading about similar problems here, but none of them helped me. I found an app called decode2, but it was not able to help me either: it wants me to feed it a table to decode, and I only have 2 tables, one called time and one called event, nothing else; pointing it at event returns nothing. Below I'm posting 2 of the events as samples:

\hex string starts here\x00\x00\x00n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x005\xE6\x00ppt/tags/tag6.\x00\x00\x00\x00]\x00]\x00\xA9\x00\x00N\xE7\x00\x00\x00

\hex start\x00\x00\x00n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xE5\x00ppt/tags/tag3.-\x00\x00\x00\x00\x00\x00!\x00\xA1

I changed the first part of each string because it would not let me post, and I also deleted the part between tag6. and the next slash; same goes for tag3.-

Is there a way to automatically convert all events from hex to text?
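One approach worth trying, assuming the events really contain literal \xNN escape sequences in _raw (this is only a sketch, not tested against your data, and the index/sourcetype names are placeholders): rewrite the \x prefix to % and let urldecode() do the conversion. Depending on where the eval runs, you may need to tweak the backslash escaping.

index=<your_index> sourcetype=<your_sourcetype>
| eval decoded=urldecode(replace(_raw, "\\\\x", "%"))
| table _time decoded

Note that non-printable bytes such as \x00 will decode to NUL characters, so you may want an additional replace() afterwards to strip anything non-printable from the decoded field.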
Treating structured data as pure text is doomed to be unstable.  Have you tried my suggestion of reconstructing events based on inherent structure?
If you are willing to accept some acrobatics, ipmask can be used even with variable netmasks.

| map search="| makeresults |fields - _* | eval Network = ipmask(\"$Mask$\", $IP$), IP = $IP$, Mask = $Mask$"

The emulated data below should give

IP            Mask           Network
192.168.1.10  255.255.255.0  192.168.1.0
10.54.3.8     255.255.246.0  10.54.2.0

Here is the emulation for you to play with and compare with real data:

| makeresults format=csv data="IP, Mask
192.168.1.10, 255.255.255.0
10.54.3.8, 255.255.246.0"

But again, to say that 192.168.1.0 is a network address is (very) classful thinking. The CIDR expressions should be

IP            Mask           Network
192.168.1.10  255.255.255.0  192.168.1.0/24
10.54.3.8     255.255.248.0  10.54.0.0/21

N'est-ce pas? This can be obtained with a bit of bit math, like this:

| map search="| makeresults |fields - _* | eval Mask = split($Mask$, \".\"), Mask = 32 - sum(mvmap(Mask, log(256 - Mask,2))), Network = ipmask(\"$Mask$\", $IP$) . \"/\" . Mask, IP = $IP$, Mask = $Mask$"
I find it interesting that you give the log file size in GB rather than in events, yet you expect the UF documentation to provide EPS. @PickleRick has explained why we cannot offer an EPS number, and also why any talk about data rates is a guess at best. A Splunk UF is very capable of handling 100 GB of log files; many customers do so regularly. What problem are you trying to solve?
OK. Have you read anything that has been written in this thread? EPS as such is not a very important concept for Splunk (at least not on the UF level).
Same speed here. What is your environment like?
Hi, yep, I understand that, but I don't understand what the error is telling me:

Error while checking the script: Can't parse /opt/splunk/etc/apps/TA-LoRaWAN_decoders/bin/br_uncompress.py: ParseError: bad input: type=1, value='print', context=(' ', (25, 8))

I think it's referring to line 25, which is:

def print(self, message, end="\n"):
We want to read log files (approx. 100 GB) and send them through the Splunk forwarder. Before setting this up, we need to verify the events-per-second (EPS) that can be recorded from a flat file with the Universal Forwarder.
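There is no fixed EPS figure in the documentation to verify against, but once the UF is sending data you can measure the rate it actually achieves from its own metrics. A minimal sketch, assuming the UF's _internal index is forwarded to your indexers and substituting your own host name for the placeholder:

index=_internal host=<your_uf_host> source=*metrics.log* group=per_source_thruput
| timechart span=1m per_second(ev) AS eps by series

This charts an approximate events-per-second rate per monitored source, based on the event counts the UF writes to its own metrics.log.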
Depending on how you set up your authorization, you might end up with different permissions in the roles on each SH (e.g. access to indexes). Check your roles to see which indexes are allowed on each SH for the roles you have.
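One quick way to compare, assuming you are allowed to query the REST API on both search heads (the field names below are from the authorization/roles endpoint):

| rest /services/authorization/roles splunk_server=local
| table title srchIndexesAllowed srchIndexesDefault imported_srchIndexesAllowed

Run it on each SH and compare the rows for your role; an index missing from srchIndexesAllowed (or from the imported list) on one SH only would explain the difference in results.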
Since September 2021, Splunk does not include Python 2, so you need to update your code if it is not compatible with Python 3. https://www.splunk.com/en_us/blog/platform/removing-python-2-from-new-splunk-cloud-and-splunk-enterprise-releases-starting-september-2021.html
@stevenbo I am curious why you need to do this, to be honest. You may also find that your current setup will be unsupported after your changes. It is always best to get some top cover from Splunk Support, especially if it's going to be a production system.
Hi, no, I still don't know what the message means!
What is your business problem?
Hi Splunkers, I have some strange behavior with a Splunk Enterprise Security SH. In the target environment, we have an indexer cluster queried by 2 SHs: a Core one and an Enterprise Security one. For a particular index, if we perform a search on the ES SH, we cannot see any data. I mean, even if we perform the simplest query possible, which is:

index=<index_name>

we get no results. However, if I try the same search on the Core SH, the data is shown. This behavior seems very strange to me because it happens only with this specific index; all other indexes return the same identical data whether the query is performed on the ES SH or the Core SH. So, in a nutshell, we can say:

Indexes that return results on the Core SH: N
Indexes that return results on the ES SH: N - 1
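One quick check, assuming you can run the same search on both SHs (<index_name> is a placeholder), is to compare what each SH reports for that index at the metadata level:

| tstats count where index=<index_name> by splunk_server

If the Core SH returns counts per indexer and the ES SH returns nothing, the usual first suspects are role-based index access (srchIndexesAllowed) or a role search filter (srchFilter) on the ES SH rather than anything on the indexers.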
Here is the raw event:
Good morning, Thank you for the feedback. Unfortunately the netmask is not fixed... I'll try with the app https://splunkbase.splunk.com/app/6595   
I would appreciate it if there were any documentation on the events-per-second (EPS) that can be recorded from a flat file with the Universal Forwarder.
My first hunch whenever "something strange" happens, seemingly at the OS level, would of course be to check SELinux.