All Posts



SPL isn't really my strong suit. Would you mind showing me how you'd do it for this UID?
We want to alert when an index's host count drops by a significant amount, let's say 25%. How would this work? I'd struggle to dynamically calculate this based on the indexes that are present.
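One way to sketch this in SPL (illustrative only; the two-hour window, the `index=*` scope, and the 25% threshold are assumptions to adapt):

```spl
| tstats dc(host) AS host_count WHERE index=* earliest=-2h@h latest=@h BY index _time span=1h
| stats earliest(host_count) AS previous latest(host_count) AS current BY index
| eval pct_drop = round((previous - current) / previous * 100, 1)
| where pct_drop >= 25
```

Because `tstats ... BY index` only enumerates indexes that actually have data, the comparison adapts to whatever indexes are present without hard-coding them.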
Data in Splunk is stored in so-called buckets. Each bucket can be in one of four states. Initially it's a hot bucket. Then it gets rolled to warm. Older buckets then get rolled to cold (possibly on a different storage volume). Finally, as the index or volume reaches its size limit, or the data reaches its defined age limit, the bucket is moved to frozen. By default that means it's simply deleted, but it can also be moved onto yet another storage location completely outside Splunk's "jurisdiction": from your Splunk instance's point of view that data is deleted, but it can be retained and manually "thawed" later. So yes, as your data ages in Splunk it may reach the point at which it is deleted.
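As a rough illustration of the settings that drive this lifecycle (the index name and all values below are made up; only the setting names are real):

```ini
# indexes.conf - illustrative retention settings for one index
[my_index]
homePath   = $SPLUNK_DB/my_index/db        # hot and warm buckets
coldPath   = $SPLUNK_DB/my_index/colddb    # cold buckets (can be a cheaper volume)
thawedPath = $SPLUNK_DB/my_index/thaweddb  # manually restored ("thawed") buckets
frozenTimePeriodInSecs = 15552000          # ~180 days: older buckets roll to frozen
maxTotalDataSizeMB = 500000                # size cap that can also trigger freezing
# coldToFrozenDir = /archive/my_index      # optional: archive frozen buckets instead of deleting
```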
Ahhh... right. It's a timechart with a BY clause, so you'll have separate series for each UID. If there is a small, fixed set of those UIDs, you can do a separate eval for each of those fields; you have to adjust the names, of course. (If you have too many series your timechart will get cluttered anyway.) Alternatively, you can loop over those fields with the foreach command.
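A minimal foreach sketch (the field names and calculation are placeholders to adapt): with multiple aggregations and a BY clause, timechart names each series like "AvgX: 1", so a wildcard picks up every UID's column.

```spl
... | timechart span=5m avg(x) AS AvgX max(x) AS MaxX BY UID
| foreach AvgX* [ eval "scaled<<MATCHSEG1>>" = '<<FIELD>>' * 8 ]
```

Here `<<FIELD>>` expands to the full matched column name and `<<MATCHSEG1>>` to the wildcarded part; the single quotes around `<<FIELD>>` are needed because the generated series names contain spaces and colons.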
Thanks for your support! I'll test it using the full request code and share it with you.
Did you solve this problem? I had the same symptoms and was wondering how you solved it.
When I run the search I don't get a field for Bytesinpercentage:
index=snmp sourcetype=snmp_attributes Name=ifHCInOctets host=a154
| streamstats current=t global=f window=2 range(Value) AS delta BY UID
| eval mbpsIn=delta*8/1024/1024
| append [search index=snmp sourcetype=snmp_attributes Name=ifHCOutOctets host=a154 | streamstats current=t global=f window=2 range(Value) AS delta BY UID | eval mbpsOut=delta*8/1024/1024 ]
| search UID=1
| timechart span=5m per_second(mbpsIn) AS MbpsIn per_second(mbpsOut) AS MbpsOut BY UID
| eval Bytesinpercentage=MbpsIn/10*1024*1024
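A likely reason for the missing field, judging only from the search as posted: with two aggregations and a BY clause, timechart renames the output columns per series (e.g. "MbpsIn: 1"), so the final eval's reference to `MbpsIn` matches nothing. Since the search already narrows to UID=1, one sketch of a fix (unverified against the actual data) is simply to drop the BY clause so the aliases survive:

```spl
... | search UID=1
| timechart span=5m per_second(mbpsIn) AS MbpsIn per_second(mbpsOut) AS MbpsOut
| eval Bytesinpercentage = MbpsIn / 10 * 1024 * 1024
```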
Hello @fsoengen , I'm using the start-dbagent script to start, and the starting script includes the java.library.path path. See the log below.
26 Mar 2025 10:35:12,187 INFO [main] Agent: - JVM Args : --add-opens=java.base/java.lang=ALL-UNNAMED | --add-opens=java.base/java.security=ALL-UNNAMED | -XX:+HeapDumpOnOutOfMemoryError | -XX:OnOutOfMemoryError=taskkill /F /PID %p | -Djava.library.path=C:\dbagent\db-agent-25.1.0.4748\auth\x64 | -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector | -Ddbagent.name=database_agent |
Thanks, Luiz Polli
"There was too much data, or it was too old, causing the oldest events to be pushed out of the indexes." Regarding this line: how is "too much data" relevant, and does it matter that the event log data itself is old when it was only indexed a week ago? Does this mean that some event logs could be overwritten?
I am pretty sure that it won't roll to the frozen state yet, as it was just indexed a week ago and the retention policy is over 6 months. I have never used the delete command before. And yes, I have permissions.
I only edited the inputs.conf in a specific app.
Hello @livehybrid , Yes, I have created the "os" index on my indexer, and I can see logs from these hosts in the _internal index.
Hi @dania_abujuma  Just to check - have you created the "os" index on your indexers? Are you able to see the _internal logs for these forwarders? This will help determine whether the issue is on the sending or the receiving side. Do you see any reference to these inputs (and any errors?) in the $SPLUNK_HOME/var/log/splunk/splunkd.log file? Please let me know how you get on, and consider adding karma to this or any other answer if it has helped. Regards Will
Hello Splunkers! I am looking for a way to collect the SunOS-SPARC OS logs. After some research, I have tried to update the inputs.conf in the Splunk Add-on for Unix and Linux ( https://splunkbase.splunk.com/app/833 ), as below (this is a snippet of the config file, not all of it):

# Currently only supports SunOS, Linux, OSX.
# May require Splunk forwarder to run as root on some platforms.
[script://./bin/service.sh]
disabled = 0
interval = 3600
source = Unix:Service
sourcetype = Unix:Service
index = os

# Currently only supports SunOS, Linux, OSX.
# May require Splunk forwarder to run as root on some platforms.
[script://./bin/sshdChecker.sh]
disabled = 0
interval = 3600
source = Unix:SSHDConfig
sourcetype = Unix:SSHDConfig
index = os

# Currently only supports Linux, OSX.
# May require Splunk forwarder to run as root on some platforms.
[script://./bin/update.sh]
disabled = 0
interval = 86400
source = Unix:Update
sourcetype = Unix:Update
index = os

[script://./bin/uptime.sh]
disabled = 0
interval = 86400
source = Unix:Uptime
sourcetype = Unix:Uptime
index = os

[script://./bin/version.sh]
disabled = 0

This didn't work and no logs were collected (I have made sure the user running the Splunk forwarder has read privileges). Is there any other recommendation?
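One way to check whether the scripted inputs are even executing on the forwarder (a sketch; replace the host value with your forwarder's name):

```spl
index=_internal host=<your_forwarder> sourcetype=splunkd component=ExecProcessor
```

The ExecProcessor component logs scripted-input launches and any stderr the scripts emit, which usually shows whether the shell scripts themselves are failing on SunOS.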
Hi @luizpolli, just to make sure, did you follow all the steps laid out in the documentation? Especially adding the java.library.path system property when starting the database agent: java -Djava.library.path="C:\dbagent_install_dir\auth\x64" -jar db-agent.jar https://docs.appdynamics.com/appd/24.x/25.3/en/database-visibility/add-database-collectors/configure-microsoft-sql-server-collectors#id-.ConfigureMicrosoftSQLServerCollectorsv24.10-BeforeyouBegin Best, Franz Edit: I somehow didn't notice the "no mssql-jdbc_auth-8.4.0.x64 in java.library.path: C:\dbagent\db-agent-25.1.0.4748\auth\x64" error message. It seems this is set up already.
Hi, you could set up a health rule of type "Tier / Node Health - Hardware" for the respective tiers and test for the metric "Agent|App|Availability". You could either set it up at the node level and test for value != 1 to get alerts for every missing node, or set it up at the tier level using "< Specific Value" with the expected number of nodes in that tier. A "< baseline" condition might also work for this use case. Under Conditions, you should also set Evaluation for "no data" scenarios to Critical or Warning to actually get alerts when no tier/node is reporting, because a missing node will not send 0; there will simply be no value at all. Best, Franz
The updates mentioned in the scenario are application updates/patching, not related to AppDynamics agent upgrades.
Hi Experts, I have a scenario in which there are 10 tiers associated with an application. After updates/patching, some tiers are missing from the AppDynamics console. Is there any way we can create an alert when this scenario happens? Will a health rule with manually added tiers help to get an alert? I don't have a real-time scenario to test. Thanks.
Hi @chenfan , let me understand: you want to use a Splunk server to send logs outside Splunk, is this correct? I suppose this HF is used to send logs to a Splunk instance and also to a third party; if it sends only to a third party, this architecture makes little sense. Anyway, to send logs to a third party using syslog, see https://docs.splunk.com/Documentation/SplunkCloud/latest/Forwarding/Forwarddatatothird-partysystemsd In addition, I suggest using rsyslog to receive syslog data rather than the Splunk HF; the Splunk HF can instead be used to forward logs to the primary Splunk instance and also to the third party. If instead you want to receive syslog data and forward it only to a third party, use only rsyslog or another tool such as logger. Ciao. Giuseppe
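For reference, a minimal outputs.conf sketch of the syslog-output approach described in that documentation (the group name, host, and port below are placeholders):

```ini
# outputs.conf on the heavy forwarder
[syslog]
defaultGroup = third_party_syslog

[syslog:third_party_syslog]
server = 192.0.2.10:514   # third-party destination (placeholder address)
type = udp                # tcp is also supported
```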