All Posts

Hi @devsru , you have to correlate your conditions using the boolean operators AND and OR and parentheses, aligned with the logic you need:
| eval severity_id=if((Private_MBytes>=20000 AND host IN ("vmd*","vmt*","vmu*")) OR (Private_MBytes>=40000 AND host="vmp*"), 4, 2)
Ciao. Giuseppe
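A note on the eval above: inside eval, both IN and = compare literal strings, so "vmd*" is not treated as a wildcard. A minimal sketch of a wildcard-safe variant using like() (same field names as above, patterns assumed) would be:
| eval severity_id=if((Private_MBytes>=20000 AND (like(host,"vmd%") OR like(host,"vmt%") OR like(host,"vmu%"))) OR (Private_MBytes>=40000 AND like(host,"vmp%")), 4, 2)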
One could argue that it's not that _indextime is appropriate here but rather that the wrong _time is being indexed. But OTOH it makes you miss those several separate time fields from "traditional SIEMs", like "event time", "receive time", "whatever time". I don't even remember how many separate timestamps ArcSight holds for each event - three? four?
Hi @richgalloway, I did change the Java setting and corrected the JAVA_HOME environment variable, but no luck. Is there a file where Splunk stores this info? Thanks
If I must guess, the use of wildcard characters is making your search not return your desired results? (Syntax-wise, I am not sure the IN operator can use square brackets.) As you only illustrated two values, there is no need to use case.
| eval severity_id=if(Private_MBytes >= 20000 AND searchmatch("host IN (vmd*,vmt*,vmu*)") OR Private_MBytes >= 40000 AND host LIKE "vmp%", 4, 2)
Hi Everyone, I have some events with the field Private_MBytes and host = vmt/vmu/vmd/vmp. I want to create a case where, when host is either vmt/vmu/vmd and Private_MBytes > 20000, OR when host is vmp and Private_MBytes > 40000, it should display the events with severity_id 4. Example:
eval severity_id=if(Private_MBytes >= "20000" AND host IN [vmd*,vmt*,vmu*],4,2)
eval severity_id=if(Private_MBytes >= "40000" AND host ==vmp*,4,2)
Note: if Private_MBytes > 40000 and the host is any of vmd/vmu/vmt, it should display severity_id 4 only, and likewise for vmp.
One problem with transferring $SPLUNK_HOME from one Splunk instance to a newer one is you will be taking the old Splunk built-in apps with you, which would not be a Good Thing. Another potential problem is you will miss out on the migration actions taken during upgrades.  If you've upgraded Splunk before, have a look at $SPLUNK_HOME/var/log/splunk/migration.log.* to see what is done behind the scenes during an upgrade.  Without that work, you may be carrying useless (or even harmful) cruft to the new version and may miss out on important changes.
@jessieb_83 wrote:
I understand that maxTotalDataSizeMB takes precedence over frozenTimePeriodInSecs.
You understand incorrectly. Both have equal precedence. The first limit reached is the one that will be applied. If the size limit is reached, then the oldest buckets will be deleted first. If maxTotalDataSizeMB is not specified, it defaults to 500000 (see indexes.conf.spec). Use the dbinspect command to examine your buckets. Make sure the oldest ones don't have a latest_time (endEpoch) that is newer than the frozenTimePeriodInSecs setting. Buckets will not age out until *all* of the events in the bucket are old enough.
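As a quick sketch of that check (the index name here is just a placeholder):
| dbinspect index=your_index
| eval bucket_latest=strftime(endEpoch, "%F %T")
| table bucketId, state, startEpoch, endEpoch, bucket_latest, sizeOnDiskMB
| sort + endEpoch
Any bucket whose endEpoch is still inside the frozenTimePeriodInSecs window will not be frozen, no matter how old its oldest events are.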
Change the Java home setting in DB Connect and/or correct the $JAVA_HOME environment variable.
Actually indextime has a WONDERFUL, very security relevant use case, and that's for events with potentially delayed data. A great example is EDR data: if a user is off network for a while and the agent can't report, then when they do finally log on, their events may flow in with the proper timestamps for when the event occurred. *However*, because we are running our detections on our most recent events, detections will completely miss these. In almost every other case, I'd recommend normal _time, but _indextime is very useful for this use case. It can also be handy with RBA, so notables don't fire again as events from the beginning of the time window roll off the detection window despite having already fired a notable and APPEAR unique in a way throttling can't account for; explained here - https://splunk.github.io/rba/searches/deduplicate_notables/#method-ii
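A minimal sketch of what a detection windowed on index time (rather than event time) can look like; the index, sourcetype, and field names here are purely illustrative:
index=edr sourcetype=edr:events _index_earliest=-70m@m _index_latest=now
| stats count by host, signature
With _index_earliest/_index_latest the window is based on when events arrived, so late-arriving EDR data still gets evaluated even though its _time falls outside the scheduled window.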
How do I change the directory path for the error below? The problem is with the /bin/bin in the path. Any help is greatly appreciated!
I understand that maxTotalDataSizeMB takes precedence over frozenTimePeriodInSecs. What happens if frozenTimePeriodInSecs is defined and maxTotalDataSizeMB is not? The Splunk docs don't cover this specific circumstance, and I haven't been able to find anything else about it. I have a requirement to keep all department security logs for 5 years regardless of how big the indexes get. They need to delete at 5.1 years. My predecessor set it up so that frozenTimePeriodInSecs = (5.1 years in seconds) and maxTotalDataSizeMB = 1000000000 (roughly 1000 TB) so that size would not affect retention, but now nothing will delete and we're retaining logs from 8 years ago. If I comment out maxTotalDataSizeMB, will frozenTimePeriodInSecs take precedence, or will the default maxTotalDataSizeMB setting take over? My indexes are roughly 7 TB, so the 500 GB default would wipe out a bunch of stuff I need to keep. In my lab environment, I commented out maxTotalDataSizeMB and set frozenTimePeriodInSecs to 6 months, but still have logs from 2 years ago. Unfortunately, my lab environment doesn't have enough archived data to test the default cutoff. Thanks!
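For context, a sketch of the kind of indexes.conf stanza being described (the index name and exact numbers are illustrative, not a recommendation):
[dept_security]
homePath   = $SPLUNK_DB/dept_security/db
coldPath   = $SPLUNK_DB/dept_security/colddb
thawedPath = $SPLUNK_DB/dept_security/thaweddb
# roughly 5.1 years (5.1 x 365 days x 86400 seconds)
frozenTimePeriodInSecs = 160833600
# must stay larger than the expected index size or it becomes the effective limit;
# if omitted, the 500000 MB default applies
maxTotalDataSizeMB = 10000000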
A common approach is to use fillnull.

index=abc
| rex field=MESSAGE "aaa(?<FIELD1>bbb)"
| rex field=MESSAGE "ccc(?<FIELD2>ddd)"
| fillnull FIELD1 FIELD2 value=UNSPEC
| stats count by FIELD1, FIELD2
| foreach FIELD1 FIELD2 [eval <<FIELD>> = if(<<FIELD>> == "UNSPEC", null(), <<FIELD>>)]

This is a made-up dataset based on your regex.

MESSAGE
aaabbbcccddd
aaabbbcccdef
aaabccccddd
abcdefg

The above method gives

FIELD1  FIELD2  count
                1
        ddd     1
bbb             1
bbb     ddd     1

Here is an emulation to produce this data

| makeresults format=csv data="MESSAGE
aaabbbcccddd
aaabbbcccdef
aaabccccddd
abcdefg"
``` the above emulates index=abc ```

Play with it and compare with real data.
Right you are, it was a misconfigured fw on the hosts.
Hi, I got this issue also: direct access is able to connect, but with Splunk I get an invalid error, even after adding the entry to pg_hba.conf:
FATAL: no pg_hba.conf entry for host "10.24.154.215", user "AMDSPLUNK", database "IOU", SSL on
Actually, the issue was driver compatibility. In this case we can try the drivers below:
1. Splunk DBX Add-on for Postgres JDBC | Splunkbase -> Add-on; works for me.
2. About the JDBC Driver for Postgres - Splunk Documentation -> With the .jar file it did not work here, but in another environment it works.
You can try it; hopefully it will help.
I have a question. We have a standalone Splunk instance in AWS running version 7.2.3 and are looking to upgrade it to 9.3.0. I see that to get to that version I will have to do about 4 upgrades. Also, since our current version is running on Red Hat version 6.4, I would have to upgrade that to be able to run the current Splunk version. What I am curious about is: AWS has a Splunk 9.3.0 AMI with BYOL. Would it be possible to migrate the data over to the new instance along with the configuration settings? This is used as a customer lab, so we only have about a dozen universal forwarders pointing to this server. There are no alerts running on it and only 3 dashboards. The Splunk home is stored on a separate volume than the OS, so I could detach it from the old instance and attach it to the new one, or snapshot it and use the snapshot on the new one. Any suggestions for this? Thanks.
Again - Firstly, check with tcpdump that your events do reach your destination host. If you don't see the data on the wire no magic within the OS will make it appear out of thin air.
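As a sketch, assuming syslog on the default UDP port 514 (adjust the interface and port to your setup):
tcpdump -i any -nn -c 20 udp port 514
If packets show up here but the data never reaches the index, the problem is on the receiving side; if nothing shows up, the senders or the network path are the place to look.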
We have set up RHEL 8.10 to be our new Splunk instance. As before on CentOS Stream, we get syslog data from everything except the VMware host syslog data... We still have the Windows Splunk server around, and if we change the Syslog.global.logHost key in the Advanced System Settings on each host back to the Windows Splunk server, then the syslog data from the hosts shows up. It appears that if splunkd is running under the splunk user, then a port forwarding solution would be required to forward to a higher port for syslog. However, splunkd is running as root, not the splunk user. Years ago, we ran Splunk on CentOS 7 and never had this issue. Is the port forwarding solution the answer here?
I think the db.system would have to match one of the systems listed in the supported databases list. On the backend, there is likely an "allow list" that checks if the database system is supported for Query Performance before it will show up in the UI. What is the value of your db.system when you use this clickhouse driver?
We saw this when we updated a saved search to be triggered "for each result" (i.e. alert.digest_mode = 0) as opposed to the default "once" (alert.digest_mode = 1). This caused the result links to start using the loadjob command.   
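For reference, the setting lives in savedsearches.conf; a sketch with a hypothetical search name:
[My Alert]
# 1 (default) = trigger once per search run, 0 = trigger for each result
alert.digest_mode = 1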
Hello Splunk ES experts, I want to make a query which will produce MTTD (something like analyzing the time difference between when a raw log event is ingested (and meets the condition of a correlation search) and when a notable event is generated based on the correlation search). I have tried the search below, but it does not give me the results I am expecting because it is not calculating the time difference for those notables which are in New status; it works fine for any other status. Can someone please help me with this? Maybe it is too simple to achieve and I am making this complex.

index=notable
| eval orig_epoch=if( NOT isnum(orig_time), strptime(orig_time, "%m/%d/%Y %H:%M:%S"), 'orig_time' )
| eval event_epoch_standardized=orig_epoch, diff_seconds='_time'-'event_epoch_standardized'
| fields + _time, search_name, diff_seconds
| stats count as notable_count, min(diff_seconds) as min_diff_seconds, max(diff_seconds) as max_diff_seconds, avg(diff_seconds) as avg_diff_seconds by search_name
| eval avg_diff=tostring(avg_diff_seconds, "duration")
| addcoltotals labelfield=search_name
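Not a fix, but a diagnostic sketch that may help narrow this down: it lists the notables whose time difference comes out null, so you can inspect what orig_time looks like for them (for example, a missing orig_time or a different timestamp format would make strptime return null):
index=notable
| eval orig_epoch=if(isnum(orig_time), 'orig_time', strptime(orig_time, "%m/%d/%Y %H:%M:%S"))
| eval diff_seconds='_time'-orig_epoch
| where isnull(diff_seconds)
| stats count by search_name, orig_time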