All Posts

I was just checking: we are using 2 x c5.large for IHF and also HEC, and TA-AWS and TA-gcp are running there too. Daily ingestion is something like 150GB. You should remember that if this is your first full Splunk instance after the UF, then you must add those props and transforms there, not on the indexers, for them to take effect!
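For anyone wondering what "add those props and transforms there" means in practice, here is a minimal sketch of a parsing-time rule placed on the heavy forwarder; the sourcetype name and the DEBUG filter are made up for illustration, not taken from this thread:

# props.conf on the heavy forwarder (hypothetical sourcetype)
[my:custom:sourcetype]
TRANSFORMS-drop_debug = drop_debug_events

# transforms.conf on the same heavy forwarder
[drop_debug_events]
REGEX = level=DEBUG
DEST_KEY = queue
FORMAT = nullQueue

Because the HF is the first full Splunk instance in the path, events arrive at the indexers already parsed, so the same stanzas placed on the indexers would never fire.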
Thanks for the info. When you say "your existing field", do you mean to put in the actual field that contains that format? Also, is there a way to save that so I could do a stats that shows the output with only the cn value?
Thanks for that. We are currently running c5n.9xl hosts (which are enormous). Using those certainly made a difference, but looking at their logs in the AWS Console they are clearly under-utilised. I guess we are going to have to start experimenting with fleets of more, but smaller, hosts to see how things go. It's a pity Splunk doesn't have a recommended machine size for when literally all you are doing is forwarding - we need to run the HTTP collector and the AWS Add-on to pull some S3 info, but even they are basically just acquiring and forwarding. No explicit indexing, no props and transforms, etc...
Hi, you could use this: ... | rex field=<your existing field> "cn=(?<cn>[^,]+)" r. Ismo PS. regex101.com is an excellent place to test these!
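To cover the stats part of the question as well, here is a runanywhere sketch; the dn field and its value are invented for illustration, so substitute your existing field name:

| makeresults
| eval dn="cn=jsmith,ou=users,ou=people,dc=corp,dc=example,dc=com"
| rex field=dn "cn=(?<cn>[^,]+)"
| stats count by cn

Once rex has extracted cn it behaves like any other field, so stats, table and so on can use it directly.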
Hey everyone, I have this format - cn=<name>,ou=<>,ou=people,dc=<>,dc=<>,dc=<> - that I'm pulling, and I need to use only the cn= field. How can I do it with the regex command? Is that possible? Thanks!!
Try something like this | eval range=coalesce(range, id)
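Putting the two suggestions together, here is a runanywhere sketch (reusing the made-up event/id data from the example further down the thread) where coalesce gives the first row a value instead of a null increment:

| makeresults format=csv data="event,id
A,1
B,2
C,4
D,5"
| streamstats current=f last(id) as prev_id
| eval increment=id - prev_id
| eval increment=coalesce(increment, id)

For event A there is no previous row, so id - prev_id is null and coalesce falls back to the id itself; how you treat that first value depends on what you expect for the start of a sequence.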
I see your logic, my bad. I'm trying to identify the start of the sequence as well, even though there is no increment based on the previous row.
What were you expecting for the first id if there is no previous row?
Hello, thank you for the answer. Indeed, trying range with a window of 2 returns results. However, I'm not picking up the start of the sequence (ID 0 and ID 1), only the last 4 IDs (2/3/4/5). Any ideas?
Assuming id is numeric, your solution should work. You could also try range with a window of 2. Here is a runanywhere example demonstrating both techniques:

| makeresults format=csv data="event,id
A,1
B,2
C,4
D,5"
| streamstats range(id) as range window=2
| streamstats current=f last(id) as prev_id
| eval increment=id-prev_id
Hi, I always teach my users that when they change a log format they must also rename its sourcetype, e.g. by adding a number after it. My standard is to name sourcetypes like "vendor:system:log-file:incremental number starting from 0". That way it's easy to just add 1 to this number (see the sketch below). r. Ismo
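As a sketch of what that convention can look like in props.conf (the vendor name, sourcetype and settings are invented for illustration):

# props.conf - original log format
[acme:firewall:traffic:0]
TIME_FORMAT = %Y-%m-%d %H:%M:%S

# props.conf - after the vendor changed the log format
[acme:firewall:traffic:1]
TIME_FORMAT = %s%3N

Old data stays searchable under the :0 sourcetype while new data gets correct parsing under :1.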
Hi, I suppose there is no need to select such a big instance to work as an HF on AWS. I have used instances with 2-4 vCPUs without any real issues. IMHO it's better to use a couple of smaller instances in an LB configuration than one huge one. Of course, if you have apps like TA-aws / DBX running on those HFs, then you need a bigger instance and should also add some pipelines there (see the sketch below). And of course you must monitor them: if there is a lack of resources or too much delay in event forwarding, then add more HFs or increase their size. The easiest way to do this is to add them as indexers in the MC and use the MC to analyse them. r. Ismo
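The pipelines setting usually meant here is parallelIngestionPipelines; a minimal server.conf sketch, assuming the HF has spare CPU for a second pipeline set:

# server.conf on the heavy forwarder
[general]
parallelIngestionPipelines = 2

Each extra pipeline set costs additional CPU and memory, so only raise it on hosts that are clearly under-utilised, and keep watching queue fill in the MC afterwards.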
Hi, I suppose so. Just create a ticket with Splunk support and ask them to split it based on the amounts you give. You probably need access to those entitlements on your support portal, or you must ask the person named as your contract contact to request it from Splunk. r. Ismo
Hi guys, I just installed the misp42 app in my Splunk and added a MISP instance to Splunk; it works. But I want to compare against my firewall logs (index=firewall srcip=10.x.x.x): I want to compare dstip with ip-dst from MISP to detect unusual access activity, e.g. when dstip equals ip-dst (152.67.251.30). How can I search for this with misp_instance=IP_Block field=value? I tried a search but it doesn't work:  index=firewall srcip=10.x.x.x | mispsearch misp_instance=IP_Block field=value | search dstip=ip=dst | table _time dstip ip-dst value action  It can't get ip-dst from the MISP instance. Can anyone help me with this, or suggest a solution? Many thanks and best regards!!
Hi, as @_JP said, the MC is just for monitoring the whole Splunk environment. You really need it in any distributed environment. There are also some useful apps on splunkbase.splunk.com you could look at, e.g. AdminAlerts. One way to avoid running out of disk space on indexers is to migrate your environment to use volumes instead of SPLUNK_DB in all index configurations (see the sketch below). That way Splunk automatically freezes data as needed based on volume size instead of days or index sizes. r. Ismo
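A minimal indexes.conf sketch of the volume approach; the path, size and index name are invented for illustration:

# indexes.conf on the indexers
[volume:hotwarm]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

[my_index]
homePath = volume:hotwarm/my_index/db
coldPath = volume:hotwarm/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

When the total size of everything on the volume approaches maxVolumeDataSizeMB, Splunk freezes the oldest buckets on that volume first, so no single runaway index can fill the disk. Note that thawedPath cannot reference a volume, hence the literal path.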
Hi, as others have already said, this is doable. Here is my old post on how to do it: https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062 Of course, in your case you must take care of the connection between those sites; it must be fast enough for this to work. One other note is that you must have the same Splunk version on all nodes in the cluster, so you have to choose between upgrading the old environment first or installing the old version on AWS first. Depending on the amount of data and the connection speed between sites you could do it that way, or another option is to create a new indexer cluster on AWS and, once it is in use, create an additional (temporary) cluster for the old on-prem data and add it as a second cluster for your search heads. That way you could utilise e.g. Snowball to transfer the data from on-site to AWS (if your connection cannot handle it in a reasonable time). r. Ismo
One additional comment: you should use cgroups version 1 on RHEL 9. See https://docs.splunk.com/Documentation/Splunk/9.1.1/Workloads/Requirements r. Ismo
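On RHEL 9 the unified (v2) cgroup hierarchy is the default, so switching to v1 means adding a kernel argument; a sketch of the usual way to do it (verify against the Red Hat and Splunk docs for your exact versions):

# run as root, then reboot for the change to take effect
grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"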
Hi, that should be fixed in all recent Splunk products. You can read more here: - https://www.splunk.com/en_us/surge/log4shell-log4j-response-overview.html - https://www.splunk.com/en_us/blog/bulletins/splunk-security-advisory-for-apache-log4j-cve-2021-44228.html r. Ismo
Good morning, I want to confirm how a value is expressed in the sap_idoc_data table after installing the AppDynamics monitoring solution for SAP 4Hana. Multiple fields are present, and one in particular has me confused: is the PROCESSING_TIME field expressed in seconds or in milliseconds? And how can I understand what PROCESSING_TIME corresponds to (is it the time for an IDoc to be read by SAP before being sent outbound, or does it have another particular meaning)? Kind regards
I would say this partly covers shipping systemd journal logs to Splunk. What I would really love is for Splunk to be able to accept data sent by systemd-journal-upload ( https://www.freedesktop.org/software/systemd/man/latest/systemd-journal-upload.service.html ). That way you would no longer need a forwarder on any popular systemd distribution; you could just use systemd.