| makeresults format=csv data="bit1,bit2
0000,0002
000f,0088
00af,00de
00bd,003c"
| fields bit1 bit2
| eval bit1ASnumber=tonumber(bit1,16), bit2ASnumber=tonumber(bit2,16)
| eval bit1ASbinary=tostring(bit1ASnumber,"binary"), bit2ASbinary=tostring(bit2ASnumber,"binary")
| table bit1 bit2 bit*ASnum* bit*ASbin*

bit1 bit2 bit1ASnumber bit2ASnumber bit1ASbinary bit2ASbinary
0000 0002   0    2        0       10
000f 0088  15  136     1111 10001000
00af 00de 175  222 10101111 11011110
00bd 003c 189   60 10111101   111100

OK, I can get you as far as converting to binary, but the binary results do not include the leading zeros needed to always make an 8-character string. Since your example had 4 characters for the hex code, those values are treated as strings, so first convert to a number before converting to binary; attempting to go straight to binary will fail when your source has a mix of alphanumeric characters. Obviously, without the leading zeros, when you concatenate the two values as strings you will lose some of the positions you need to count. I also didn't sort out counting how many zeros sit to the right of the last occurring 1, but essentially that is what comes after inserting your leading zeros.
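The remaining steps (padding with leading zeros, swapping the two bytes as in your example, and counting the zeros to the right of the last 1) could be sketched like this; field names are illustrative:

```
| makeresults format=csv data="hex
0002
0088
00af"
| eval swapped = substr(hex, 3, 2) . substr(hex, 1, 2)
| eval num = tonumber(swapped, 16)
| eval bin = tostring(num, "binary")
| eval bin16 = substr("0000000000000000" . bin, -16)
| rex field=bin16 "1(?<trailing>0*)$"
| eval bitpos = len(trailing)
```

For 0002 this gives bin16=0000001000000000 and bitpos=9, matching the worked example in the question. Note the rex won't match an all-zero value, so bitpos stays null in that case.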
Yes, Studio is not a good choice for anything vaguely sophisticated!
When using Dashboard Studio there is currently no option for this. From the docs:

dataValuesDisplay ("off" | "all" | "minmax") — default "off". Specifies whether the chart should display no labels, all labels, or only the min and max labels.
https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/DashStudio/chartsBar

I have even tried putting the overlay onto the y2 axis, but dataValuesDisplay is a higher-level option, so it impacts all axes.

{
    "type": "splunk.column",
    "dataSources": {
        "primary": "ds_TNxdC2O9"
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false,
    "title": "Column Chart",
    "description": "Overlay Test",
    "options": {
        "y": "> primary | frameBySeriesNames('regular','_span')",
        "y2": "> primary | frameBySeriesNames('overlay','_span')",
        "overlayFields": [
            "overlay"
        ],
        "dataValuesDisplay": "all"
    },
    "context": {}
}
Try Splunk's webhook action in the alert settings. In Teams, you can configure the settings as shown here (to create the webhook URL in Teams): https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook?tabs=newteams%2Cdotnet
I updated the search:

<search depends="some_other_token">
    <query>| mysearch id in $some_other_token$ | head 1 | fields product_id</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <refresh>60</refresh>
    <done>
        <condition match="'job.resultCount' != 0">
            <set token="form.some_token">$result.product_id$</set>
        </condition>
        <condition match="'job.resultCount' == 0">
            <set token="form.some_token">*</set>
        </condition>
    </done>
</search>

The "All" choice is an option in the following multiselect:

<input id="select_abc" type="multiselect" token="some_token" searchWhenChanged="true">
    <default>*</default>
    <prefix>(</prefix>
    <suffix>)</suffix>
    <valuePrefix>"</valuePrefix>
    <valueSuffix>"</valueSuffix>
    <choice value="*">All</choice>
    <search base="base_search">
        <query>| search to fill dropdown options | fields label, product_id</query>
    </search>
    <fieldForLabel>label</fieldForLabel>
    <fieldForValue>product_id</fieldForValue>
    <delimiter>,</delimiter>
</input>

So I want to set the value of the above multiselect (some_token) on init and whenever another dropdown (some_other_token) changes; some_other_token is used in the search above.
It is not clear exactly what you are trying to do. Is the hex code to be treated as if it is least significant byte first (left-most) and most significant byte second, but with the bits in each byte treated as most significant bit first (left-most)? Is that right? Do you just want to know the bit position of the least significant set bit?
1. Don't use the _json sourcetype. If needed, create a new one copying settings from _json.
2. Don't use indexed extractions unless you're absolutely sure what you're doing.
3. Don't edit the _json sourcetype - it's a built-in sourcetype which shouldn't be used explicitly anyway (see point 1).
4. The count(field) aggregation counts single values, so if you have multivalued fields it's normal for count(field) to be higher than the overall event count. In your case you probably (as @gcusello already pointed out) have multiple occurrences of a "timestamp" field within a single event, so it gets parsed as a multivalued field and counted accordingly.
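You can see point 4 in action with a quick run-anywhere example (the data is purely illustrative):

```
| makeresults
| eval timestamp = split("2024-01-01 00:00:00,2024-01-01 00:00:01", ",")
| stats count count(timestamp)
```

This produces a single event, yet count=1 while count(timestamp)=2, because stats counts each value of the multivalued field separately.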
Unfortunately, third-party add-ons and their manuals are often... how to say it gently... not written in the best way possible. They are written by people who might be proficient with their respective solutions but not necessarily knowledgeable in Splunk.

The advised way to get syslog data into Splunk is still to use an external syslog daemon, which either writes the data to files (from which you pick up the events with a UF and a monitor input) or sends the data to Splunk's HEC input. For a small-scale test environment, sending directly to Splunk might be relatively OK (if you don't mind the cons of such a setup), but you need to create your udp or tcp inputs on high ports (over 1024) when not running Splunk as root.
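As a sketch of the file-monitoring variant: the syslog daemon (e.g. rsyslog or syslog-ng) writes per-host files to disk, and the UF picks them up with a monitor stanza. Paths, index, and sourcetype below are illustrative, not prescriptive:

```
# inputs.conf on the Universal Forwarder
[monitor:///var/log/remote-syslog/*/messages.log]
index = network
sourcetype = syslog
disabled = 0
```

This keeps Splunk restarts from dropping syslog traffic, since the daemon keeps writing to files regardless of the forwarder's state.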
Hi @Iris_Pi , as I said, you probably have two timestamps for each event, so you could use _time (you probably associated one of the two timestamps with this field), or you could take the first one for each event using mvdedup. Ciao. Giuseppe
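Assuming the multivalued field is called timestamp (adjust to your actual field name), that suggestion could look like:

```
| eval timestamp = mvdedup(timestamp)
| stats count BY timestamp
```

mvdedup collapses duplicate values within the multivalued field; alternatively, mvindex(timestamp, 0) takes just the first value.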
I followed this suggestion, but it doesn't work:

>>> If you have json field extraction at index time via INDEXED_EXTRACTIONS = JSON, you need two additional lines to solve this problem:
AUTO_KV_JSON = false
KV_MODE = none >>>
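For reference, the combined settings would look like this in props.conf (the sourcetype name here is illustrative):

```
[my_json]
INDEXED_EXTRACTIONS = JSON
AUTO_KV_JSON = false
KV_MODE = none
```

One thing to verify: INDEXED_EXTRACTIONS applies where the data is ingested (the UF or heavy forwarder), while AUTO_KV_JSON and KV_MODE are search-time settings that must be present on the search head - placing all three on only one tier is a common reason this fix appears not to work.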
Hi @Iris_Pi , you probably have more than one timestamp in each event - what happens if you do the count using a different field (e.g. _time)? Ciao. Giuseppe
Hello guys, I've hit a weird problem when uploading a JSON file. As you can see in the following screenshots, there are only 17790 events; however, when I tried to count the occurrences of the fields, the number is twice the event count.

- example 1
- example 2

The source type I used is _json. Please share your insight here - thank you in advance!
I had a reply from Splunk Support; it seems that init.d has not been supported for a while now, as mentioned here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/ConfigureSplunktostartatboottime

"The init.d boot-start script is not compatible with RHEL 8 and higher. You can instead configure systemd to manage boot start and run splunkd as a service. For more information, see Enable boot start on machines that run systemd."

In fact, I have this issue on an Oracle Linux 8 machine.
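On systemd-based distributions such as Oracle Linux 8, the documented alternative is to switch boot start to systemd management. A sketch, assuming the default install path and a "splunk" service account (adjust both to your environment):

```
/opt/splunk/bin/splunk disable boot-start
/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
```

Run as root; this creates a systemd unit so splunkd is managed as a service instead of via init.d.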
Some additional details why it happens here:   https://community.splunk.com/t5/Getting-Data-In/splunk-winprintmon-exe-splunk-winPrintMon-monitorHost/m-p/680673
Here below is the reply from Splunk Support, who contacted the development team:

Please find the detailed explanation for the issue and the solution.

The logs and your inputs.conf excerpt indicate that the Splunk Universal Forwarder (UF) is indeed ignoring the interval parameter specified for the WinPrintMon modular input. This is explicitly stated in the log message:

> Ignoring parameter "interval" for modular input "WinPrintMon" when scheduling the runtime for script

Why is this happening? The reason behind this behavior lies in how Splunk UF handles script-based modular inputs and their failures.

Script failures and retries: When the splunk-winprintmon script fails (in this case, due to the disabled Printer service), Splunk UF doesn't wait for the configured interval to retry. Instead, it attempts to restart the script almost immediately. This rapid retry mechanism is likely designed to ensure quick recovery from transient errors.

Interval adherence on success: The interval parameter is only respected when the script completes successfully. In other words, if the splunk-winprintmon script runs without errors, Splunk UF will wait for the specified 600 seconds before executing it again. The combination of script failures and the ignored interval leads to the observed high frequency of error messages in the _internal index (3 times per second). This can potentially overwhelm the Splunk indexer and impact overall system performance.

Solution: fix the script failure. The primary issue is the disabled Printer service causing the splunk-winprintmon script to fail.

* Enable the Printer service if print monitoring is required.
* Disable the WinPrintMon input in inputs.conf if print monitoring is not needed.

Splunk UF's behavior of ignoring the interval parameter for failing script-based modular inputs is by design. It prioritizes quick recovery from errors over strict adherence to the configured polling interval.
Understanding this behavior is crucial for troubleshooting and optimizing Splunk UF deployments, especially when dealing with inputs that might experience frequent failures. The error 0x800706ba indicates that the "RPC server is unavailable," which is due to the Printer service being disabled.  
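If print monitoring is not needed, the "disable" option from the reply would look roughly like this in inputs.conf (the stanza name is an assumption - check how the input is named in your Windows TA):

```
[WinPrintMon://printmon]
disabled = 1
```

With the input disabled, the splunk-winprintmon script is no longer launched, so the rapid-retry flood of 0x800706ba errors stops.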
Hello,

I would like to convert my hexadecimal code to a bit value based on this calculation.

Hex code: 0002
Separate into 2 bytes: 00/02

2-byte bitmask:
Byte 0: HEX = 00 - 0000 0000
Byte 1: HEX = 02 - 0000 0010

Byte 1 Byte 0 - 0000 0010 0000 0000

Calculate the position of the non-zero bit counting from the right side:
Byte combination - 0000 0010 0000 0000
Position -                      9 8765 4321

At position 10 we get a 1 counting from the right side, so the bit value is 9.

I need to calculate this in Splunk, where the HEX_Code is the value from the lookup. Thanks in advance! Happy Splunking!
Yes @ITWhisperer , I tried - let me check my data again in case something is missing in the events, or try with the data you generated.
Where is "all" coming from? It is not shown in your source listing. Also, the depends block you have shown is not part of valid SimpleXML. How are you using the latest_product_id token?
What happens when you execute the restart command manually? Do you use the correct user in your ansible script? Maybe you have to set "become: true" if Splunk runs under root.
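A minimal sketch of such a task, assuming Splunk is managed by systemd under the unit name Splunkd (both the unit name and the need for privilege escalation are assumptions to verify against your environment):

```yaml
- name: Restart Splunk
  ansible.builtin.service:
    name: Splunkd
    state: restarted
  become: true
```

With become: true, Ansible escalates privileges for this task, which is required when the service runs under root.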
Dear @PickleRick , I followed the steps in https://www.fortinet.com/content/dam/fortinet/assets/alliances/Fortinet-Splunk-Deployment-Guide.pdf but that didn't solve the issue.