You could try using tokens as parts of URLs. If all else fails, you could of course try to write custom JS (if it's a SimpleXML dashboard; that won't work for Dashboard Studio dashboards, AFAIR), but it's not a pretty solution.
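For reference, a minimal sketch of the token-in-URL approach in a SimpleXML table panel - the URL, the search, and the host field are made-up placeholders, not from the original dashboard:

<table>
  <search>
    <query>index=_internal | stats count by host</query>
  </search>
  <!-- $row.host$ expands to the host value of the clicked row -->
  <drilldown>
    <link target="_blank">https://example.com/details?host=$row.host$</link>
  </drilldown>
</table>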
It's a very broad description. And don't worry about the lack of the Enterprise Security app. Yes, it's great and has many useful functionalities, but you can do quite a lot with Splunk Enterprise on its own. The "problem" here is that you don't know what you need. Think about what data you have and what it can tell you about a possible attack. You can use the Security Essentials app for inspiration. But please, don't do "checkbox security", meaning "just write something that seems to satisfy some literal requirement in the least-effort way possible" so that you can cross it off your to-do list. That actually impairs your security posture.
The Ingestion Latency indicator is based on "checkpoint" files generated by the forwarders. The file (var/spool/tracker/tracker.log) is periodically generated on a UF and contains a timestamp, which Splunk compares after ingestion to see how long it took for that file to reach the indexer. There is one case where the latency alert is a false positive - sometimes the input doesn't properly delete the file when ingesting its contents, so new timestamps are appended to the end of the file. It happened to me once or twice. But other than that, the latency warning simply means that it takes "too long" for the data to get from being read by the UF to being indexed by the indexers. The possible reasons include:
1. Load on the forwarder (this is usually not an issue if you're ingesting logs from a server which normally does some other production work and you only ingest its own logs, but it might be an issue if you have a "log gatherer" setup receiving logs from a wide environment).
2. Throttling on output due to bandwidth limits.
3. The need to ingest a big backlog of events (can happen if the UF wasn't running for some time, or if you're installing a fresh UF on a host which was already running and has produced logs you want ingested).
4. Connectivity/configuration problems preventing the UF from sending the buffered data to the indexers.
5. Blocked receivers due to performance problems.
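If you want to measure end-to-end latency yourself, a minimal sketch comparing event time with index time - the index filter and time range are assumptions, so narrow them to your environment (and note this relies on events having correct timestamps):

index=* earliest=-1h
| eval latency=_indextime-_time
| stats avg(latency) as avg_latency_s max(latency) as max_latency_s by host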
It seems perfectly acceleratable. The bin command is a streaming one, so the requirement that only streaming commands precede the first transforming command is fulfilled. You could try to use summary indexing here instead of report acceleration, though - this would give you more flexibility in using the summarized data later, should you need to reference it in other searches.
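A minimal sketch of the summary-indexing variant, assuming an hourly schedule and a pre-created summary index (the index names, span, and aggregation are placeholders - adapt them to the actual report):

index=my_data earliest=-1h@h latest=@h
| bin _time span=1h
| stats count as event_count by _time host
| collect index=my_summary

You would then report off index=my_summary instead of re-running the raw search.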
Hi community, I'm wondering if it's possible to forward a specific index in Splunk to other third-party systems or a SIEM such as QRadar or any other SIEM. I have read something about doing this with a HF, that it's possible, but I don't understand it fully. If yes, please give me an approach to do this. Thanks
OK, but what user/password are you putting in? It should be your splunk.com account (possibly the one you use for posting on Answers) because underneath, the server logs in to Splunkbase with your credentials. Of course you need internet access from your Splunk server (I assume you use an all-in-one setup).
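If the web-based installation keeps failing, a workaround is to download the app package from splunkbase.splunk.com in a browser and install it from file on the server - a sketch, assuming a Linux all-in-one install under /opt/splunk and a hypothetical package name and credentials:

/opt/splunk/bin/splunk install app /tmp/some_app.tgz -auth admin:yourpassword

Restart Splunk afterwards if the app requires it.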
Delta is a relatively simple command - it just calculates the difference from the previous value. Nothing more, nothing less. If you want to track the differences separately for - for example - different devices, you need to use streamstats to copy over the previous value of a given field X separately for each value of field Y (or a combination of more fields):

| streamstats current=f window=1 values(myfield) as old_myfield by splitfield

Now you can just calculate the difference of myfield and old_myfield, as in the sketch below.
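Putting it together - myfield and splitfield are placeholders for your value field and your split-by field:

| streamstats current=f window=1 values(myfield) as old_myfield by splitfield
| eval mydelta = myfield - old_myfield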
Hi, I manage to monitor the servers, divided into services, via ITSI. However, I would like to receive email alerts when some of my servers change state to either inactive or unstable, for better reactivity.
Until you can tell us what data you have, what field/value in that data indicates inactive and unstable entities, and how you want the output to look, volunteers are not going to be able to help you.
Hi @Cyner__, If both devices are connected to your Zyxel access point / router using WiFi, make sure layer-2 isolation is correctly configured for the devices to communicate. You should be able to find instructions for configuring isolation white lists in your Zyxel documentation.
Hi @reza, Contact Splunk support and let them know your splunk.com account is not working correctly with the Splunkbase API when using Splunk Web to install or upgrade apps. They'll need to correct the issue on the backend.
Hi @alex8103, If your measurements are cumulative, you can use either a simple stats range aggregation or a combination of streamstats and stats, assuming a valid epoch _time value:

| stats range(_time) as dt range(W) as dW by device
| eval kWh=(dW/1000)*(dt/3600)

or

| sort 0 _time
| streamstats current=f global=f window=2 last(_time) as pre_time last(W) as pre_W by device
| eval dt=_time-pre_time, dW=W-pre_W
| stats sum(dW) as dW sum(dt) as dt by device
| eval kWh=(dW/1000)*(dt/3600)

If you want to chart differences between cumulative measurements over _time, you can use streamstats and timechart:

| sort 0 _time
| streamstats current=f global=f window=2 last(_time) as pre_time last(W) as pre_W by device
| eval dt=_time-pre_time, dW=W-pre_W
| timechart eval((sum(dW)/1000)*(sum(dt)/3600)) as kWh by device

The timechart command snaps values to the nearest bin. If you need a more precise chart, use a span argument corresponding to your time measurement precision. (I don't work with power measurements. If I did the admittedly very basic math incorrectly, please correct it in a reply!)
Hi @jpillai, It should work as written, although you don't need the extra fields command. What process did you use to accelerate the report? If you used Splunk Web, were any errors reported by the user interface?
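One way to verify that the acceleration settings actually took effect is the saved-searches REST endpoint - a sketch, with the report title as a placeholder:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="My Accelerated Report"
| table title auto_summarize auto_summarize.dispatch.earliest_time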
Hi guys! How do I proceed to create alerts on inactive and unstable entities?
Hi richgalloway, Thanks a lot for your prompt response! It works. Many thanks, Kenji
Try putting the field name in single quotes so Splunk knows it's a field and not something else.

index=gnmi name=ethernet_counter tags.source=sri-devgrp-prert00 earliest=06/08/2024:08:00:00 latest=06/08/2024:09:22:00
| sort _time
| streamstats current=f last(values.in65To127OctetFrames) as previous_value by tags.interface_name
| eval value_diff = 'values.in65To127OctetFrames' - previous_value
| table _time tags.interface_name value_diff
Hi all, I want to find the difference between two values (values.in65To127OctetFrames). My data is like below.

{"name":"ethernet_counter","timestamp":1717838243109,"tags":{"interface_name":"Ethernet48","source":"sri-devgrp-prert00","subscription-name":"ethernet_counter"},"values":{"in65To127OctetFrames":2922198453881}}
{"name":"ethernet_counter","timestamp":1717837943109,"tags":{"interface_name":"Ethernet48","source":"sri-devgrp-prert00","subscription-name":"ethernet_counter"},"values":{"in65To127OctetFrames":2922102453899}}
{"name":"ethernet_counter","timestamp":1717837643345,"tags":{"interface_name":"Ethernet48","source":"sri-devgrp-prert00","subscription-name":"ethernet_counter"},"values":{"in65To127OctetFrames":2922006507704}}

I tried the following SPL, but I received "Error in 'EvalCommand': Type checking failed. '-' only takes numbers.".

index=gnmi name=ethernet_counter tags.source=sri-devgrp-prert00 earliest=06/08/2024:08:00:00 latest=06/08/2024:09:22:00
| sort _time
| streamstats current=f last(values.in65To127OctetFrames) as previous_value by tags.interface_name
| eval value_diff = values.in65To127OctetFrames - previous_value
| table _time tags.interface_name value_diff

I am very new to Splunk. Could someone help me write a proper SPL? Many thanks, Kenji
Hello everyone, I use the delta command in Splunk Enterprise to record the power consumption of a device. This only gives me the difference in consumption. Now, however, I want to add 3 more devices to the same diagram, so the whole thing should be added up to a total consumption. Is this possible with delta, and if so, how? Which commands do I need for this? Greetings, Alex
As you are not able to use Splunk ES (SIEM), you can still do a lot of security monitoring with Splunk. At a high level: I would first look at the various data sources that give you the data you want, such as Windows and Linux authentication and access logs (in Windows, the Security event log; in Linux, the various secure logs - /var/log/auth.log, /var/log/secure, /var/log/audit/audit.log, etc.). For file-level access on Windows or Linux, you need audit logging enabled, so you will need to ensure this data is in the logs. It's best to work with your security team / OS admins to define and ensure the logs you want are set, and start to ingest the data into Splunk as per standard methods.
Once you have data/logs ingested into Splunk, you can analyse the data and begin to develop the SPL queries. I would start by looking at Splunk Security Essentials - this provides many use cases and the SPL code. Yes, many are related to Splunk ES (SIEM), but you can still begin to look at some of the basic ones with SPL, for example Brute Force Detection - this will show some out-of-the-box SPL (see the sketch below). From there you can use and develop it and look at others. Once you have the ones suitable for your environment and you have tested them, you can set up reports and email alerts via Splunk.
Download this app; it's there to help with security use cases and then some more. (It's not a monitoring app.)
This is the app: https://splunkbase.splunk.com/app/3435
This is getting started with SSE: https://lantern.splunk.com/Security/Getting_Started/Getting_started_with_Splunk_Security_Essentials
Use Case Explorer - helps with more use cases: https://lantern.splunk.com/Security/UCE
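For illustration, a minimal brute-force-style detection sketch - the index, sourcetype, and threshold are assumptions, not taken verbatim from Security Essentials, so adjust them to your Windows ingest:

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625
| stats count as failed_logons dc(user) as distinct_users by src
| where failed_logons > 20

EventCode 4625 is a failed Windows logon; a burst of failures from one source, especially across many users, is the classic brute-force pattern.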
On the HF, place the files there (easy):

/opt/splunk/etc/system/local/props.conf
/opt/splunk/etc/system/local/transforms.conf

If you know how to create a custom TA app, then that's better - this will give you a start, but you need to understand the app structure: https://dev.splunk.com/enterprise/tutorials/quickstart_old/createyourfirstapp/ If you manage to create the app, then place it in /opt/splunk/etc/apps/ on the HF.
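To tie this back to the question above (forwarding specific data to QRadar): a minimal sketch of syslog routing on the HF - the sourcetype, routing group name, and destination are placeholders, and you'd scope the props.conf stanza to whatever identifies the data you want routed:

props.conf:
[my_sourcetype]
TRANSFORMS-route_to_siem = send_to_qradar

transforms.conf:
[send_to_qradar]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = qradar_out

outputs.conf:
[syslog:qradar_out]
server = qradar.example.com:514

With this in place, matching events are sent to the syslog destination; whether they also continue to your indexers depends on your default output group settings.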