
All Posts

Try the Splunk webhook alert action in your alert settings. In Teams you can configure the webhook as shown here (to create a webhook URL in Teams): https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook?tabs=newteams%2Cdotnet
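For reference, a minimal sketch of what the webhook action could look like in savedsearches.conf once you have the Teams URL - the stanza name and URL below are placeholders, not values from this thread:

[My Teams Alert]
action.webhook = 1
action.webhook.param.url = https://example.webhook.office.com/webhookb2/your-webhook-id

The same two settings correspond to choosing "Webhook" under the alert's trigger actions in the UI and pasting the Teams URL there.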
I updated the search:

<search depends="some_other_token">
  <query>| mysearch id in $some_other_token$ | head 1 | fields product_id</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <refresh>60</refresh>
  <done>
    <condition match="'job.resultCount' != 0">
      <set token="form.some_token">$result.product_id$</set>
    </condition>
    <condition match="'job.resultCount' == 0">
      <set token="form.some_token">*</set>
    </condition>
  </done>
</search>

The "All" choice is an option in the following multiselect:

<input id="select_abc" type="multiselect" token="some_token" searchWhenChanged="true">
  <default>*</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <choice value="*">All</choice>
  <search base="base_search">
    <query>| search to fill dropdown options | fields label, product_id</query>
  </search>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>product_id</fieldForValue>
  <delimiter>,</delimiter>
</input>

So I want to set the value of the above multiselect (some_token) on init and whenever another dropdown (some_other_token) changes. some_other_token is used in the search above.
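One pattern that might help here (a sketch, not something from the thread - the dropdown's label and omitted choices are placeholders): a <change> handler on the driving dropdown can reset the multiselect token directly, so you don't depend on the secondary search's <done> handler alone:

<input id="select_other" type="dropdown" token="some_other_token" searchWhenChanged="true">
  <label>Other dropdown</label>
  <!-- choices / populating search omitted -->
  <change>
    <!-- reset the multiselect whenever this dropdown changes;
         the secondary search above can then overwrite it when it completes -->
    <set token="form.some_token">*</set>
  </change>
</input>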
It is not clear exactly what you are trying to do - is the hex code to be treated as if it is least significant byte first (left-most) and most significant byte second, but with the bits within each byte treated as most significant bit first (left-most)? Is that right? Do you just want to know the bit position of the least significant set bit?
1. Don't use the _json sourcetype. If needed, create a new one copying settings from _json.
2. Don't use indexed extractions unless you're absolutely sure what you're doing.
3. Don't edit the _json sourcetype - it's a built-in sourcetype which shouldn't be used explicitly anyway (see point 1).
4. The count(field) aggregation counts single values, so if you have multivalued fields it's normal for count(field) to be higher than the overall event count. In your case you probably (as @gcusello already pointed out) have multiple occurrences of a "timestamp" field within a single event, so it gets parsed as a multivalued field and counted accordingly - you can verify this with the sketch after this list.
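A minimal sketch to verify the multivalue theory (the index and sourcetype names here are placeholders, and it assumes the field is literally named "timestamp"):

index=your_index sourcetype=your_json_sourcetype
| eval ts_values=coalesce(mvcount(timestamp), 0)
| stats count AS events, sum(ts_values) AS timestamp_values

If timestamp_values comes out at roughly twice events, each event carries two timestamp values.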
Unfortunately, third-party add-ons and their manuals are often... how to say it gently... not written in the best way possible. They are written by people who might be proficient with their respective solutions but not necessarily knowledgeable in Splunk. The advised way to get syslog data into Splunk is still to use an external syslog daemon, which either writes the data to files (from which you pick up the events with a UF and a monitor input) or sends the data to Splunk's HEC input. For a small-scale test environment, sending directly to Splunk might be relatively OK (if you don't mind the cons of such a setup), but you need to create your udp or tcp inputs on high ports (over 1024) when not running Splunk as root.
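For the file-based route, a minimal inputs.conf sketch on the UF - the directory layout is an assumption (rsyslog/syslog-ng writing one subdirectory per source host), not something from this thread:

[monitor:///var/log/remote-syslog/*/*.log]
sourcetype = syslog
index = network
host_segment = 4

host_segment = 4 tells Splunk to take the host name from the fourth path segment (the per-host subdirectory), preserving the origin host that a direct TCP/UDP input would otherwise report as the syslog server itself.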
Hi @Iris_Pi,
as I said, you probably have two timestamps for each event, so you could use _time (you probably associated one of the two timestamps with this field), or you could take just one value per event using mvdedup.
Ciao.
Giuseppe
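A sketch of both options, assuming the multivalued field is named "timestamp" (your_search stands in for the actual base search):

your_search
| eval ts_dedup=mvdedup(timestamp)
| eval ts_first=mvindex(timestamp, 0)
| stats count by ts_first

mvdedup removes duplicate values from the multivalue field; mvindex(..., 0) simply takes the first value regardless of duplicates.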
I followed this suggestion, but it doesn't work.

>>> If you have json field extraction at index time via
INDEXED_EXTRACTIONS = JSON
you need two additional lines to solve this problem:
AUTO_KV_JSON = false
KV_MODE = none
>>>
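For context, these settings would normally sit together in a custom sourcetype stanza in props.conf - the stanza name here is a placeholder:

[my_custom_json]
INDEXED_EXTRACTIONS = JSON
AUTO_KV_JSON = false
KV_MODE = none

Note that INDEXED_EXTRACTIONS must be set where the data is parsed (the UF or HF), while AUTO_KV_JSON and KV_MODE take effect at search time on the search head, so the stanza may need to be deployed in both places for the combination to work.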
Hi @Iris_Pi,
you probably have more than one timestamp in each event; what happens if you count the stats using a different field (e.g. _time)?
Ciao.
Giuseppe
Hello guys,
I've hit a weird problem when uploading a JSON file. As you can see in the following screenshots, there are only 17790 events; however, when I tried to count the occurrences of the fields, the number is twice the event count.

[screenshot: example 1]
[screenshot: example 2]

The source type I used is _json. Please share your insight here, thank you in advance!
I had a reply from Splunk Support; it seems that init.d has not been supported for a while now, as mentioned here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/ConfigureSplunktostartatboottime

"The init.d boot-start script is not compatible with RHEL 8 and higher. You can instead configure systemd to manage boot start and run splunkd as a service. For more information, see Enable boot start on machines that run systemd."

In fact I have this issue on an Oracle Linux 8 machine.
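For reference, switching from init.d to systemd-managed boot start looks roughly like this (run as root; $SPLUNK_HOME and the splunk user are assumptions, not values from this thread):

$SPLUNK_HOME/bin/splunk disable boot-start
$SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunk

After that, splunkd is controlled as a service (systemctl start/stop Splunkd, with Splunkd being the default unit name).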
Some additional details on why it happens here: https://community.splunk.com/t5/Getting-Data-In/splunk-winprintmon-exe-splunk-winPrintMon-monitorHost/m-p/680673
Below is the reply from Splunk Support, who contacted the development team:

Please find the detailed explanation for the issue and the solution:

The logs and your inputs.conf excerpt indicate that the Splunk Universal Forwarder (UF) is indeed ignoring the interval parameter specified for the WinPrintMon modular input. This is explicitly stated in the log message:

> Ignoring parameter "interval" for modular input "WinPrintMon" when scheduling the runtime for script

Why is this happening? The reason behind this behavior lies in how Splunk UF handles script-based modular inputs and their failures.

Script failures and retries: when the splunk-winprintmon script fails (in this case, due to the disabled Printer service), Splunk UF doesn't wait for the configured interval to retry. Instead, it attempts to restart the script almost immediately. This rapid retry mechanism is likely designed to ensure quick recovery from transient errors.

Interval adherence on success: the interval parameter is only respected when the script completes successfully. In other words, if the splunk-winprintmon script runs without errors, Splunk UF will wait for the specified 600 seconds before executing it again. The combination of script failures and the ignored interval leads to the observed high frequency of error messages in the _internal index (3 times per second). This can potentially overwhelm the Splunk indexer and impact overall system performance.

Solution: fix the script failure. The primary issue is the disabled Printer service causing the splunk-winprintmon script to fail.

* Enable the Printer service if print monitoring is required.
* Disable the WinPrintMon input in inputs.conf if print monitoring is not needed.

Splunk UF's behavior of ignoring the interval parameter for failing script-based modular inputs is by design. It prioritizes quick recovery from errors over strict adherence to the configured polling interval.

Understanding this behavior is crucial for troubleshooting and optimizing Splunk UF deployments, especially when dealing with inputs that might experience frequent failures. The error 0x800706ba indicates that the "RPC server is unavailable", which is due to the Printer service being disabled.
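If print monitoring isn't needed, disabling the input is a one-line change in inputs.conf - the stanza name below is illustrative; use whichever WinPrintMon stanza actually appears in your inputs.conf:

[WinPrintMon://printmon]
disabled = 1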
Hello,

I would like to convert my hexadecimal code to a bit value based on this calculation.

Hex code: 0002
Separate into 2 bytes: 00 / 02

2-byte bitmask:
Byte 0: HEX = 00 - 0000 0000
Byte 1: HEX = 02 - 0000 0010

Byte 1 + Byte 0: 0000 0010 0000 0000

Then find the position of the non-zero bit, counting from the right side (rightmost bit = position 1): the 1 is at position 10, so the (zero-based) bit value is 9.

I need to calculate this in Splunk, where HEX_Code is the value from the lookup.
Thanks in advance! Happy Splunking!
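A sketch of one way to do this in SPL, assuming HEX_Code is always two bytes (four hex digits) and exactly one bit is set - makeresults here just stands in for your lookup:

| makeresults
| eval HEX_Code="0002"
| eval dec=tonumber(HEX_Code, 16)
| eval byte0=floor(dec / 256), byte1=dec % 256
| eval combined=byte1 * 256 + byte0
| eval bit_value=floor(log(combined, 2))

For HEX_Code="0002" this yields combined=512 and bit_value=9. The byte swap (byte1 * 256 + byte0) implements the "left-most byte is least significant" interpretation; if more than one bit can be set, you would need an mvrange/mvfilter approach instead of the log trick.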
Yes @ITWhisperer, I tried. Let me check my data again to see if something is missing in the events, or try with the data you have generated.
Where is "all" coming from? It is not shown in your source listing. Also, the depends block you have shown is not part of valid SimpleXML. How are you using the latest_product_id token?
What happens when you execute the restart command manually? Do you use the correct user in your Ansible script? Maybe you have to set "become: true" if Splunk runs under root.
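For illustration, a minimal Ansible task sketch - the path and user are assumptions, not values from this thread:

- name: Restart Splunk
  become: true
  become_user: splunk
  ansible.builtin.command: /opt/splunk/bin/splunk restart

If Splunk is systemd-managed, ansible.builtin.systemd with name=Splunkd would be the cleaner option.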
Dear @PickleRick,
I followed the steps in https://www.fortinet.com/content/dam/fortinet/assets/alliances/Fortinet-Splunk-Deployment-Guide.pdf but that did not solve the issue.
Dear @PickleRick,
thank you for your correction. Do you have any suggestion or best practice for me?

Regards, Dika
OK. This approach is wrong on many levels.

1. Receiving syslog directly on an indexer (or HF or UF) causes data loss whenever you need to restart that Splunk component.
2. When you're receiving syslog directly on Splunk, you lose at least some of the network-level metadata, and you can't use that information to - for example - route events to different indexes or assign them different sourcetypes. Because of that, you need to open multiple ports for separate types of sources, which uses up resources and complicates the setup.
3. In order to receive syslog on a low port (514), Splunk would have to run as root. This is something you should _not_ be doing. Are you sure that input has even opened that port?
4. If you have two indexers (clustered or standalone?) and receive on only one of them, you're asking for data asymmetry.
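If you do test a direct input anyway, a high-port UDP stanza in inputs.conf would look roughly like this (the port, index, and sourcetype are placeholders):

[udp://5514]
sourcetype = fortigate
index = network
connection_host = ip

connection_host = ip records the sender's IP address as the host field, which is about as much network-level metadata as a direct input preserves.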
I found notifications:

File Monitor Input
Forwarder Ingestion Latency
Ingestion Latency
Large and Archive File Reader-0
Large and Archive File Reader-1
Real-time Reader-0
Real-time Reader-1

are red too. Is it because too many logs are sent from the FortiGate?