All Posts

FYI, this is happening quite randomly - it fills with wrong values (not the value above it). It's quite random; sometimes it works, sometimes it doesn't.
Noted, thanks @ITWhisperer. However, it looks like it's not working as expected. This is before filldown. This is after filldown. Why is it not populating 2024/09/04 07:54:20.445 from the rows below? Instead it is filling with 2024/09/04 07:54:52.137.
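A note on the behaviour described above (a sketch, with the field name timestamp assumed): filldown only copies the last non-null value from the rows above a gap, never from below. If you want gaps filled from the rows below instead, reverse the result order first, fill, then reverse back:

| reverse
| filldown timestamp
| reverse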
I would like to create a Service Analyzer that displays only KPIs. Is that possible? If so, I would like to know the steps.
I have 60 correlation searches in Content Management. Some of my correlation searches don't trigger anything in Incident Review, but when I run them manually they show results. No suppression, no throttling, and now I'm confused. Someone help me, please.
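A hedged first check (the savedsearch_name below is a placeholder): a correlation search that returns results when run manually but never creates notables is often being skipped or delayed by the scheduler, which you can confirm from the scheduler logs:

index=_internal sourcetype=scheduler savedsearch_name="<your correlation search>"
| stats count BY savedsearch_name, status

If you see a high proportion of status=skipped, look at scheduler load and the search's schedule before anything else.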
Hi everyone, I'm currently sending vCenter logs via syslog to Splunk and have ensured that the syslog configuration and index name on Splunk are correct. However, the logs still aren't appearing in the index. I have tried tcpdump and I can see the logs arriving at my Splunk instance. Below I attach the syslog configuration and the tcpdump result from my Splunk instance. What could be the cause of this issue, and what steps should I take to troubleshoot it? Thanks for any insights!
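A hedged suggestion (no index or sourcetype names assumed): when packets reach the host but nothing appears under the expected index, the events are often landing in a different index or sourcetype, or being indexed with a badly parsed timestamp that falls outside your search window. A quick way to see where recent data is actually landing, searched over a wide time range:

| tstats count WHERE index=* BY index, sourcetype

It is also worth searching the expected index over All time, in case timestamps are being extracted incorrectly.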
Hi, the Splunk Heavy Forwarders and Deployment Servers were running under the splunk user. Unfortunately, during the upgrade process an admin used the root account, and now these Splunk instances are running as root. How can I switch back to the splunk user? These instances are running on Red Hat Linux.
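A minimal sketch of the usual recovery, assuming a default /opt/splunk installation and an existing splunk account; verify the path and your boot-start mechanism (init.d vs systemd) before running:

# as root
/opt/splunk/bin/splunk stop
chown -R splunk:splunk /opt/splunk            # return ownership of files root touched during the upgrade
/opt/splunk/bin/splunk disable boot-start     # drop the root-owned boot configuration
/opt/splunk/bin/splunk enable boot-start -user splunk
su - splunk -c "/opt/splunk/bin/splunk start"

The chown is the critical step: even after restarting as splunk, the instance will fail on any file that root re-owned while it was running.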
Yup. As @richgalloway said, if by configuration you mean copying over all the stuff in $SPLUNK_HOME, that means simply copying your whole installation at whatever version it is right now. Since apps are simply collections of files, some of them being an important part of the overall configuration, you usually can't just skip copying them and expect everything to work as it used to.

You probably could copy over just the indexed data - that should work. But then you'd need to _at least_ copy over the index definitions as well, and probably some datamodel definitions and acceleration configurations (although those you can rebuild, it takes time). And then you'll find yourself wanting to preserve other configuration items, reports, dashboards and... in the end it turns out it's better to just upgrade the whole thing as it was.

You could try to manually isolate the "Splunk config" items, copy them over with the indexed data to the new instance, and then try to (again, manually) migrate settings from each app separately, but that means you'd have to install each app from scratch and check whether the app changed between the version you use now and the new version (a huge part of your apps probably still uses Python 2, so there are definitely changes in the apps themselves).

There are some possible paths out of your 7.2.3, but they involve a lot of effort which you might actually save by doing those multiple in-place upgrades.
Experiencing an issue on a few random servers, some Domain Controllers and some Member Servers: Windows Security Event logs just seem to randomly stop sending. If I restart the Splunk UF, the event logs start gathering again. We are using Splunk UF 9.1.5, but I also noticed this issue on Splunk UF 9.1.4. I thought it had been corrected when we upgraded to Splunk UF 9.1.5, but it's re-appeared - the most recent occurrence was roughly 3 weeks ago on 15 servers across multiple clients we manage. This has unfortunately resulted in the loss of a few weeks of data, as the local event logs were eventually discarded once they filled up. I have now written a Splunk alert to notify us each day if any servers are in this situation (it compares the Windows servers reporting into two different indexes, one of which is for Windows Security Event logs), so we can spot the issue more easily. We are raising a case with Splunk support today about the issue.
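For anyone building a similar alert, a rough sketch of the comparison described above; the index names windows and wineventlog are placeholders for the general OS index and the Security event log index:

| tstats latest(_time) AS last_seen WHERE (index=windows OR index=wineventlog) BY host, index
| xyseries host index last_seen
| where isnotnull(windows) AND (isnull(wineventlog) OR wineventlog < now() - 86400)

This flags hosts that are still reporting into the general index but have been silent in the Security log index for over 24 hours.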
Hello members, I have configured a Splunk HF to receive a data input on port 1531/udp. I used the command firewall-cmd --permanent --zone=public --add-port=1531/udp, but when I use firewall-cmd --list-all the port doesn't appear among the open ports. Is this considered a problem? I also checked netstat and the port is listening on 0.0.0.0 (all). Thanks
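A likely explanation, assuming stock firewalld behaviour: --permanent only writes the rule to the saved configuration and does not touch the running firewall until it is reloaded, while --list-all shows the runtime state. So:

firewall-cmd --reload                                  # apply the permanent configuration to the runtime
firewall-cmd --zone=public --list-ports                # the port should now appear
firewall-cmd --permanent --zone=public --list-ports    # or inspect the saved config directly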
Hi @devsru, you have to combine your conditions using the boolean operators AND and OR and parentheses, aligned with the logic you need. One caveat: inside eval, wildcards are not expanded the way they are in the search command, so use like() for the host prefix tests:

| eval severity_id=if((Private_MBytes>=20000 AND (like(host,"vmd%") OR like(host,"vmt%") OR like(host,"vmu%"))) OR (Private_MBytes>=40000 AND like(host,"vmp%")), 4, 2)

Ciao. Giuseppe
One could argue that it's not that _indextime is appropriate here, but rather that the wrong _time is being indexed. But OTOH it makes you miss those several separate time fields from "traditional SIEMs", like "event time", "receive time", "whatever time". I don't even remember how many separate timestamps ArcSight holds for each event - three? four?
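For what it's worth, Splunk does keep both timestamps on every event, so the "receive time" can always be surfaced next to the "event time" - a small sketch:

| eval event_time=strftime(_time, "%F %T"), receive_time=strftime(_indextime, "%F %T"), lag_sec=_indextime - _time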
Hi @richgalloway, I did change the Java setting and corrected the JAVA_HOME env, but no luck. Is there a file where Splunk is storing this info? Thanks
If I must guess, the use of wildcard characters makes your search not return your desired results? (Syntax-wise, I am not sure the IN operator can use square brackets.) As you only illustrated two values, there is no need to use case.

| eval severity_id=if(Private_MBytes >= 20000 AND searchmatch("host IN (vmd*,vmt*,vmu*)") OR Private_MBytes >= 40000 AND host LIKE "vmp%", 4, 2)
Hi everyone, I have some events with the field Private_MBytes and host = vmt/vmu/vmd/vmp. I want to create a case where, when host is vmt/vmu/vmd and Private_MBytes > 20000, OR when host is vmp and Private_MBytes > 40000, the events are displayed with severity_id 4. Example:

eval severity_id=if(Private_MBytes >= "20000" AND host IN [vmd*,vmt*,vmu*],4,2)
eval severity_id=if(Private_MBytes >= "40000" AND host ==vmp*,4,2)

Note: if Private_MBytes > 40000, then for any vmd/vmu/vmt host it should display severity_id 4 only, and for vmp as well.
One problem with transferring $SPLUNK_HOME from one Splunk instance to a newer one is you will be taking the old Splunk built-in apps with you, which would not be a Good Thing. Another potential problem is you will miss out on the migration actions taken during upgrades.  If you've upgraded Splunk before, have a look at $SPLUNK_HOME/var/log/splunk/migration.log.* to see what is done behind the scenes during an upgrade.  Without that work, you may be carrying useless (or even harmful) cruft to the new version and may miss out on important changes.
@jessieb_83 wrote: I understand that maxTotalDataSizeMB takes precedence over frozenTimePeriodInSecs. You understand incorrectly.  Both have equal precedence; the first limit reached is the one that will be applied.  If the size limit is reached, then the oldest buckets will be deleted first. If maxTotalDataSizeMB is not specified, it defaults to 500000 (see indexes.conf.spec). Use the dbinspect command to examine your buckets.  Make sure the oldest ones don't have a latest_time (endEpoch) that is newer than the frozenTimePeriodInSecs cutoff.  Buckets will not age out until *all* of the events in the bucket are old enough.
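A quick sketch of that dbinspect check (the index name is a placeholder); it lists each bucket's newest event so you can spot buckets whose endEpoch keeps them inside the retention window:

| dbinspect index=your_index
| eval newest_event=strftime(endEpoch, "%F %T"), age_days=round((now() - endEpoch) / 86400)
| table bucketId, state, newest_event, age_days, sizeOnDiskMB
| sort - age_days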
Change the Java home setting in DB Connect and/or correct the $JAVA_HOME environment variable.
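On the follow-up question of where this is persisted: from memory, DB Connect 3.x writes the JVM path into a dbx_settings.conf under the app's local directory. Treat the path, stanza, and key below as assumptions and verify them on your own instance before editing:

# $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/dbx_settings.conf  (location assumed)
[java]
javaHome = /usr/lib/jvm/java-11-openjdk    # key name assumed from DB Connect 3.x; verify before editing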
Actually, _indextime has a WONDERFUL, very security-relevant use case, and that's for events with potentially delayed data. A great example is EDR data: if a user is off network for a while and the agent can't report, then when they do finally log on, their events may flow in with the proper timestamps for when the events occurred; *however*, because we are running our detections on our most recent events, the detections will completely miss these. In almost every other case, I'd recommend normal _time, but _indextime is very useful for this use case. It can also be handy with RBA, so notables don't re-fire as events from the beginning of the time window roll off the detection window - results that have already fired a notable can otherwise appear unique in ways throttling can't account for; explained here - https://splunk.github.io/rba/searches/deduplicate_notables/#method-ii
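In SPL terms, the usual way to run a detection over arrival time rather than event time is with the index-time modifiers; a sketch, with the index, sourcetype, and window sizes as placeholders:

index=edr sourcetype=your:edr:events _index_earliest=-65m@m _index_latest=-5m@m
| stats count BY host, signature

The stats line just stands in for your actual detection logic; the point is the _index_earliest/_index_latest window, which catches late-arriving events regardless of their _time.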
How do I change the directory path for the error below? The problem is the /bin/bin in the path. Any help is greatly appreciated!
I understand that maxTotalDataSizeMB takes precedence over frozenTimePeriodInSecs. What happens if frozenTimePeriodInSecs is defined and maxTotalDataSizeMB is not? The Splunk docs don't cover this specific circumstance, and I haven't been able to find anything else about it. I have a requirement to keep all department security logs for 5 years regardless of how big the indexes get; they need to be deleted at 5.1 years. My predecessor set it up so that frozenTimePeriodInSecs = (5.1 years in seconds) and maxTotalDataSizeMB = 1000000000 (roughly 1000 TB) so that size would not affect retention, but now nothing gets deleted and we're retaining logs from 8 years ago. If I comment out maxTotalDataSizeMB, will frozenTimePeriodInSecs take precedence, or will the default maxTotalDataSizeMB setting take over? My indexes are roughly 7 TB, so the 500 GB default would wipe out a bunch of stuff I need to keep. In my lab environment, I commented out maxTotalDataSizeMB and set frozenTimePeriodInSecs to 6 months, but I still have logs from 2 years ago. Unfortunately, my lab environment doesn't have enough archived data to test the default cutoff. Thanks!
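For reference, a sketch of the stated intent as an indexes.conf stanza (the index name is a placeholder, and 160833600 assumes 5.1 years of 365 days). Per the answer above, maxTotalDataSizeMB always applies - commenting it out reverts to the 500000 MB default - so time-only retention needs an explicitly oversized cap left in place:

[dept_security]
frozenTimePeriodInSecs = 160833600
# keep an explicit, oversized cap: removing this line reverts to the
# 500000 MB default, which would delete data that must be retained
maxTotalDataSizeMB = 1000000000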