Hi all, I have set up an indexer cluster to achieve high availability at the ingestion phase. I'm aware of "Update peer configuration" and I have reviewed the instructions under the Details tab of the SentinelOne App, but I can't see any explicit mention of an indexer-cluster setup. What are the steps to set up the input configuration for an indexer cluster while avoiding data duplication? Thanks for your help.
I have the same issue. Is there a way? I tried writing into the submitted token model inside the "on" call, but I cannot "get" the token at the higher level. On the console I can see that it was indeed written into the token, though.
Hi there, logs sent to SC4S include the date, time, and host in the event; however, when they are sent to the indexer, the date, time, and host are missing. How can I get them back so the logs look exactly the same? I would like the date, time, and host included in the event. I appreciate any hints. Thanks and regards, pawelF
How can I convert a Splunk event to STIX 2.1 JSON? I think I need this for a connection to a SOC center. I currently use Splunk Enterprise. How can I do this? Is there an app that can do the conversion?
Hi. We are seeing weird behaviour on one of our universal forwarders. We have been sending logs from this forwarder for quite a while, and it has been working properly the entire time. New logfiles are created every second hour, and log lines are appended to the newest file. Last night the universal forwarder stopped working normally. Every time a new file is created, the forwarder sends the first line to Splunk, but lines appended later on seem to be ignored and are not forwarded. There are no errors logged in splunkd.log on the forwarder, nor any error messages on the receiving indexers. As far as I can see, there have not been any changes on the forwarder or on the Splunk servers that might cause this defect. Is there any way to debug the parsing of the logfile on the forwarder to identify the issue? Any other ideas about what the cause could be? Thanks.
The host value in the file below gets changed automatically every now and then. Can you help me write a bash script that checks the host value every 5 minutes, and if the value differs from the actual hostname (as reported by "uname -n"), automatically corrects the host value, saves the file, and then restarts the Splunk service?
cat /opt/splunk/etc/system/local/inputs.conf
[default]
host=iorper-spf52
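A minimal sketch of such a watchdog script, assuming the stock /opt/splunk paths shown in the post; the conf path, the splunk binary location, and the cron schedule are all assumptions to adjust for your environment:

```shell
#!/usr/bin/env bash
# Sketch of a host-value watchdog. Example cron entry (every 5 minutes):
#   */5 * * * * /usr/local/bin/fix_splunk_host.sh

fix_splunk_host() {
    local conf="${1:-/opt/splunk/etc/system/local/inputs.conf}"
    local splunk_bin="${2:-/opt/splunk/bin/splunk}"
    local actual current

    actual="$(uname -n)"
    # Pull the current host= value out of inputs.conf (first match wins).
    current="$(sed -n 's/^host[[:space:]]*=[[:space:]]*//p' "$conf" 2>/dev/null | head -n 1)"

    if [ -n "$current" ] && [ "$current" != "$actual" ]; then
        # Rewrite the host= line in place, then restart Splunk to apply it.
        sed -i "s/^host[[:space:]]*=.*/host=${actual}/" "$conf"
        "$splunk_bin" restart
    fi
}

fix_splunk_host "$@"
```

Note that the script only restarts Splunk when the value actually drifts, so running it from cron every 5 minutes is cheap when nothing has changed.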
I am looking for this information to check the history of modifications made to a lookup file. If anyone can help me with this, it will be much appreciated!
Hi! Thanks for taking the time; sadly this didn't work out for me. Ideally I would keep the same format:
| timechart span=1s count AS TPS
| eventstats max(TPS) as peakTPS
| eval peakTime=if(peakTPS==TPS,_time,null())
| stats avg(TPS) as avgTPS first(peakTPS) as peakTPS first(peakTime) as peakTime
| fieldformat peakTime=strftime(peakTime,"%x %X")
With the addition of a couple of lines for the min TPS and when it occurred, that would be ideal.
Hello @WanLohnston, you can try something like this (note that after the timechart only nb_myfield exists, so the eventstats aggregations must reference it rather than myfield):
| timechart span=1d count(myfield) as nb_myfield
| eventstats min(nb_myfield) as min_fields max(nb_myfield) as max_fields avg(nb_myfield) as moy_fields
Hi all, I have to parse Juniper switch logs that are very similar to Cisco IOS. In the Juniper Add-on there isn't anything for parsing these logs, so I have to create a new add-on. Has anyone already done this and can give me some hints, so I can avoid reinventing the wheel? Ciao. Giuseppe
Each series has a single colour. If you want each column to be a different colour, you need to rearrange your data so that the values are in different fields (columns in a table).