
All Posts

Hi @Cyner__, If both devices are connected to your Zyxel access point / router using WiFi, make sure layer-2 isolation is correctly configured for the devices to communicate. You should be able to find instructions for configuring isolation white lists in your Zyxel documentation.
Hi @reza, Contact Splunk support and let them know your splunk.com account is not working correctly with the Splunkbase API when using Splunk Web to install or upgrade apps. They'll need to correct the issue on the backend.
Hi @alex8103,

If your measurements are cumulative, you can use either a simple stats range aggregation or a combination of streamstats and stats, assuming a valid epoch _time value:

| stats range(_time) as dt range(W) as dW by device
| eval kWh=(dW/1000)*(dt/3600)

| sort 0 _time
| streamstats current=f global=f window=2 last(_time) as pre_time last(W) as pre_W by device
| eval dt=_time-pre_time, dW=W-pre_W
| stats sum(dW) as dW sum(dt) as dt by device
| eval kWh=(dW/1000)*(dt/3600)

If you want to chart differences between cumulative measurements over _time, you can use streamstats and timechart:

| sort 0 _time
| streamstats current=f global=f window=2 last(_time) as pre_time last(W) as pre_W by device
| eval dt=_time-pre_time, dW=W-pre_W
| timechart eval((sum(dW)/1000)*(sum(dt)/3600)) as kWh by device

The timechart command snaps values to the nearest bin. If you need a more precise chart, use a span argument corresponding to your time measurement precision.

(I don't work with power measurements. If I did the admittedly very basic math incorrectly, please correct it in a reply!)
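As a quick sanity check of the units with made-up numbers (not from the thread): dW = 2000 W over dt = 1800 s gives kWh = (2000/1000)*(1800/3600) = 2 kW * 0.5 h = 1 kWh, i.e. kilowatts times hours as intended.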
Hi @jpillai, It should work as written, although you don't need the extra fields command. What process did you use to accelerate the report? If you used Splunk Web, were any errors reported by the user interface?
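If it helps to rule things out (my addition; the endpoint is worth double-checking on your version), the summarization REST endpoint lists report acceleration summaries and their build status:

| rest /services/admin/summarization splunk_server=local

If the accelerated report does not show up there, that suggests the acceleration was never actually created.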
Hi guys! How do I create alerts on inactive and unstable entities?
Hi richgalloway, Thanks a lot for your prompt response! It works. Many thanks, Kenji
Try putting the field name in single quotes so Splunk knows it's a field and not something else.

index=gnmi name=ethernet_counter tags.source=sri-devgrp-prert00 earliest=06/08/2024:08:00:00 latest=06/08/2024:09:22:00
| sort _time
| streamstats current=f last(values.in65To127OctetFrames) as previous_value by tags.interface_name
| eval value_diff = 'values.in65To127OctetFrames' - previous_value
| table _time tags.interface_name value_diff
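The reason this matters: in eval, double quotes make a string literal, single quotes refer to a field by name, and an unquoted dotted name like values.in65To127OctetFrames is read as the concatenation of two fields (the dot is eval's concatenation operator), which is what triggers the "'-' only takes numbers" error. A tiny self-contained illustration (the demo field names are mine):

| makeresults
| eval 'values.in65To127OctetFrames' = 10
| eval as_field = 'values.in65To127OctetFrames'
| eval as_string = "values.in65To127OctetFrames"

Here as_field comes out as the number 10, while as_string is just the literal text.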
Hi all,

I want to find the difference between two values (values.in65To127OctetFrames). My data looks like this:

{"name":"ethernet_counter","timestamp":1717838243109,"tags":{"interface_name":"Ethernet48","source":"sri-devgrp-prert00","subscription-name":"ethernet_counter"},"values":{"in65To127OctetFrames":2922198453881}}
{"name":"ethernet_counter","timestamp":1717837943109,"tags":{"interface_name":"Ethernet48","source":"sri-devgrp-prert00","subscription-name":"ethernet_counter"},"values":{"in65To127OctetFrames":2922102453899}}
{"name":"ethernet_counter","timestamp":1717837643345,"tags":{"interface_name":"Ethernet48","source":"sri-devgrp-prert00","subscription-name":"ethernet_counter"},"values":{"in65To127OctetFrames":2922006507704}}

I tried the following SPL, but I received "Error in 'EvalCommand': Type checking failed. '-' only takes numbers.".

index=gnmi name=ethernet_counter tags.source=sri-devgrp-prert00 earliest=06/08/2024:08:00:00 latest=06/08/2024:09:22:00
| sort _time
| streamstats current=f last(values.in65To127OctetFrames) as previous_value by tags.interface_name
| eval value_diff = values.in65To127OctetFrames - previous_value
| table _time tags.interface_name value_diff

I am very new to Splunk. Could someone help me write a proper SPL query?

Many thanks, Kenji
Hello everyone,

I use the delta command in Splunk Enterprise to record the power consumption of a device. This only gives me the difference in consumption. Now, however, I want to add 3 more devices to the same diagram, and the whole thing should be added up to a total consumption. Is this possible with delta, and if so, how? Which commands do I need for this?

Greetings, Alex
Even though you are not able to use Splunk ES (SIEM), you can still do a lot of security monitoring with Splunk. At a high level:

First, look at the various data sources that give you the data you want, such as Windows and Linux authentication and access logs (on Windows, the Security event log; on Linux, /var/log/auth.log, /var/log/secure, /var/log/audit/audit.log, etc.). For file-level access on Windows or Linux, you need audit logging enabled, so you will need to ensure this data actually appears in the logs. It's best to work with your security team and OS admins to define the logs you need, confirm they are being generated, and then ingest the data into Splunk using the standard methods.

Once you have the data ingested into Splunk, you can analyse it and begin to develop SPL queries. I would start by looking at Splunk Security Essentials; it provides many use cases along with the SPL code. Yes, many are related to Splunk ES (SIEM), but you can still begin with some of the basic ones, for example Brute Force Detection, which shows out-of-the-box SPL you can adapt before looking at others. Once you have identified the ones suitable for your environment and tested them, you can set up reports and email alerts via Splunk.

Download the Splunk Security Essentials app; it helps with security use cases and then some. (It is not a monitoring app.)
The app: https://splunkbase.splunk.com/app/3435
Getting started with Splunk Security Essentials: https://lantern.splunk.com/Security/Getting_Started/Getting_started_with_Splunk_Security_Essentials
Use Case Explorer, which helps with more use cases: https://lantern.splunk.com/Security/UCE
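For a flavour of what such a detection can look like, here is a minimal brute-force-style sketch of my own (not taken from Security Essentials; the index name and threshold are placeholders to tune):

index=wineventlog EventCode=4625
| stats count as failures by src, user
| where failures > 10

It counts failed Windows logons (EventCode 4625) per source and user and flags anything over an arbitrary threshold; the Security Essentials versions are more complete.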
On the HF, place the files here (the easy option):

/opt/splunk/etc/system/local/props.conf
/opt/splunk/etc/system/local/transforms.conf

If you know how to create a custom TA app, that's better. This will give you a start, but you need to understand the app structure: https://dev.splunk.com/enterprise/tutorials/quickstart_old/createyourfirstapp/

If you manage to create the app, place it in /opt/splunk/etc/apps/ on the HF.
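For reference, a minimal parsing TA could be laid out like this (a sketch with a made-up app name, TA-sql-audit):

/opt/splunk/etc/apps/TA-sql-audit/
    default/
        app.conf
        props.conf
        transforms.conf
    metadata/
        default.meta

The props.conf and transforms.conf under default/ hold the same stanzas you would otherwise put in system/local, and the app travels as a single directory you can version and reuse.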
@deepakc Actually we are using Splunk Cloud, so no indexer. However, we are sending from the UF to an HF and then to Splunk Cloud, so I believe we can test this on the HF and see. Also, I realized that the second empty event does have the date and time captured, so practically it's not an empty event; it just does not have any valuable info. So based on your method I am planning to use the config below. Hope this works? But I have one doubt: where on the HF should I place it, since its only function is to drop events? Is it in /opt/splunk/etc/system or in /opt/splunk/etc/apps?

# props.conf
[sql:audit]
TRANSFORMS-null_events = strip_null_events

# transforms.conf
[strip_null_events]
REGEX = ^\d{2}/\d{2}/\d{4} \d{1,2}:\d{2}:\d{2} [AP]M$
DEST_KEY = queue
FORMAT = nullQueue
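To sanity-check the regex before deploying it (my own sample line and search; the index name is a placeholder), an event consisting solely of a timestamp such as

06/08/2024 9:15:23 AM

should match, and you can confirm against events already indexed with:

index=main sourcetype=sql:audit
| regex _raw="^\d{2}/\d{2}/\d{4} \d{1,2}:\d{2}:\d{2} [AP]M$"

Whatever that search returns is what the nullQueue transform would drop.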
OK. Thanks for your help, gcusello.
So you are trying to exclude any event from a host if it has 4608 in the past 5 minutes. Try:

index=windows product=Windows (EventCode="4609" OR EventCode="4608" OR EventCode="6008") NOT [search index=windows product=Windows EventCode=4608 earliest=-5m | stats values(host) as host]
| table _time name host dvc EventCode severity Message
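In case it helps to see what the subsearch does (the host names here are made up for illustration), stats values(host) as host hands its results back to the outer search, so at run time the NOT clause effectively expands to something like:

NOT (host="server01" OR host="server02")

i.e., every event from any host that logged a 4608 in the last 5 minutes is excluded.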
The Line breaking documentation describes MAX_EVENTS thus:

MAX_EVENTS = <integer>
* The maximum number of input lines to add to any event.
* Splunk software breaks after it reads the specified number of lines.
* Default: 256

I looked at my broken events; the maximum number of lines seems to be 257. Knowing some of my outputs are > 1000 lines, I added MAX_EVENTS = 2000 to the sourcetype. Now I am seeing new events with a large number of lines and no more broken events. (It took some time for this change to take effect, though.)

Just to be clear: this is unrelated to the REST API receivers/simple endpoint; it is merely a matter of lines in individual events. The limit is set in props.conf per source type; that is why I could not find any applicable setting in limits.conf.
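For anyone landing here later, the change amounts to something like this (a sketch; "my_long_output" is a stand-in for the actual sourcetype name):

# props.conf on the parsing tier
[my_long_output]
MAX_EVENTS = 2000

The instance doing the parsing then needs to pick up the configuration change before it applies.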
Thanks very much, I really appreciate your help.

| chatgpt org=Sirt key=Splunk-Mdc

That was finally working.
I have three dropdown lists: one with component (A, B, C, D), another with severity (Info, Warning), and a colour dropdown list.

If I select A, Info - the colour dropdown list should be shown.
If I select B, Info - the colour dropdown list should not be shown.

How can I achieve this?
Hello, thank you for your help. I am seeing the Red status in the Health Report. We are using on-prem. Right now it is showing Yellow, but it frequently flips to Red. The Description says to look at Root Cause for details, but I can't figure out how to view the "Root Cause".

Thanks again, Tom
OK, try this:

| chatgpt org=Sirt key=Splunk-Mdc

Or use "default" for the name of the org and key, and then the following will work:

| chatgpt
Hi,

The file was placed in a folder monitored by the HF (so through inputs.conf), but even when we tested uploading it via the GUI (like we tested in the dev environment), it still wasn't parsed.

The sourcetype is a custom one:

[Sourcetype_1]
BREAK_ONLY_BEFORE_DATE =
CHARSET = UTF-8
DATETIME_CONFIG =
EVAL-CREATION_DATE =
EVAL-DEPT =
EVAL-FIRST_NAME =
EVAL-FONCTION =
EVAL-FULL_NAME = if(match(Name, "(Disabled)"), substr(Name, 1, len(Name)-11), Name)
EVAL-LAST_LOGON = replace(Last_Seen, "(\d+)\.(\d+)\.(\d+)", "\3.\2.\1")
EVAL-LAST_NAME =
EVAL-LOCKED = if(match(Name, "(Disabled)"), "Yes", "No")
EVAL-LOCK_REASON =
EVAL-LOGIN = Name
EVAL-MAIL = Email
EVAL-METROID =
EVAL-PROFILE = Roles."|".Scope."|".Groups
EVAL-PWD_VALID_TO =
EVAL-STORE_CODE_5digits =
EVAL-USER_IDENTIFICATION = "1 Firstname 1 Name"
EVAL-VALID_FROM =
EVAL-VALID_TO =
EXTRACT-DATE_EXTRACTION = (?i)^.+_(?P<DATE_EXTRACTION>\d{8})\.csv in source
EXTRACT-Name,Email,Scope,Last_Seen =
EXTRACT-username,type,firstname,lastname,email =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
disabled = false
pulldown_type = 1
EXTRACT-Name,Roles,Email,Groups,Language,Agent_Type,Scope,Last_Seen = ^(?<Name>[^;]*);(?<Roles>[^;]*);(?<Email>[^;]*);(?<Groups>[^;]*);(?<Language>[^;]*);(?<Agent_Type>[^;]*);(?<Scope>[^;]*);(?<Last_Seen>[^;]*)
#MAX_TIMESTAMP_LOOKAHEAD = 1000
#HEADER_FIELD_LINE_NUMBER = 1

I know the sourcetype isn't clean or anything, but why would it work standalone and not in a distributed environment?