All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


https://www.splunk.com/en_us/blog/platform/splunking-your-conf-files-how-to-audit-configuration-changes-like-a-boss.html?locale=en_us This will show you how to track conf file changes. Earlier questions wanted change control on dashboards, which are XML files, so it won't work for those.
Usually a search is skipped because of external factors (for example, no available resources), but a search will also be skipped if it contains an error or if it's already/still running. The Monitoring Console or CMC will tell you the reason for the skips in the Scheduler Activity dashboard. Fix the reason and the skips should stop.
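As a rough sketch of the kind of search behind that dashboard (assuming you can read the _internal index), something like this should surface the skip reasons per saved search:

```
index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name reason
| sort - count
```

The `reason` field in the scheduler logs spells out why each run was skipped (concurrency limits, still-running instances, and so on).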
This workaround does work for us:

%SPLUNK_HOME%\etc\apps\introspection_generator_addon\local\server.conf

[introspection:generator:resource_usage]
disabled = true
acquireExtra_i_data = false
Hi Team, we're getting skipped-search alerts for all 3 Lookup Gen searches. How can we resolve this? Even after disabling those searches we are still getting errors for them. Your assistance is greatly appreciated.

Lookup Gen - bh_sourcetype_cache | broken_hosts | Relation '' is unknown. (2) | none | 2
Lookup Gen - bh_host_cache | broken_hosts | Relation '' is unknown. (2) | none | 2
Lookup Gen - bh_index_cache | broken_hosts | Relation '' is unknown. (2) | none
OK. This is confusing. You have four hexadecimal digits, but they're little-endian, so what matters is the resulting bit order (not byte order, mind you; you're happily using the same word for both bits and bytes). But what should the calculation be based on? What do you want to achieve? You showed only one example, which is a power of 2, so it gives you just one set bit in your whole 16-bit sequence. But what if you had 0x63 0x3A?

0x63 is 01100011, 0x3A is 00111010

As this is little-endian, the resulting bit stream would be 00111010 01100011

And what now? Do you want the position of the first non-zero bit from the right? And what does it have to do with "lookup"?
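If the goal really is the position of the lowest set bit, here is one possible SPL sketch. Assumptions: the bytes have already been swapped into a single value ("3A63" here is just the example above after the swap), and your Splunk version supports mvmap (8.0+):

```
| makeresults
| eval n=tonumber("3A63", 16)
| eval bits=mvrange(0,16)
| eval setbits=mvmap(bits, if(floor(n/pow(2,bits))%2==1, bits, null()))
| eval lowest_set_bit=mvindex(setbits, 0)
```

mvrange generates the candidate bit positions 0-15, mvmap keeps only the positions whose bit is set, and mvindex takes the first (lowest) one.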
I've successfully uploaded and installed a private app to Cloud. The app simply contains a few JavaScript-based utilities which are located in, e.g.:

common_ui_util/appserver/static/js/close_div.js

I'm hoping to use these in the same way that I'm able to from an Enterprise install where, from any other app context, I'm able to include the JavaScript in the simple XML, e.g.:

<form version="1.1" script="common_ui_util:/js/close_div.js">

However, this isn't working for me in Cloud, and the console shows the script as 404. The console shows the path as:

https://<subdomain>.splunkcloud.com/en-US/static/@<id>/app/common_ui_util//js/close_div.js

I've verified that the app is installed and that I have set permissions to read for everyone, and exported globally.

Common UI Utilities | common_ui_util | 1.0.0 | No | No | Global | Permissions | Enabled | Disable | Uninstall | Edit properties | View objects

What am I missing here?
Hi Team, is there any information on when Compliance Essentials will be updated to support CMMC version 2.0? From my understanding it continues to support only 1.0, and this is stopping customers from considering the Splunk platform for their environment, as there are specific needs around using the Splunk platform to address CMMC compliance.
| makeresults format=csv data="bit1,bit2
0000,0002
000f,0088
00af,00de
00bd,003c"
| fields bit1 bit2
| eval bit1ASnumber=tonumber(bit1,16), bit2ASnumber=tonumber(bit2,16)
| eval bit1ASbinary=tostring(bit1ASnumber,"binary"), bit2ASbinary=tostring(bit2ASnumber,"binary")
| table bit1 bit2 bit*ASnum* bit*ASbin*

bit1 | bit2 | bit1ASnumber | bit2ASnumber | bit1ASbinary | bit2ASbinary
0000 | 0002 | 0   | 2   | 0        | 10
000f | 0088 | 15  | 136 | 1111     | 10001000
00af | 00de | 175 | 222 | 10101111 | 11011110
00bd | 003c | 189 | 60  | 10111101 | 111100

OK, I can get you as far as converting to binary, but the binary results do not include the leading 0's needed to always make an 8-character string. Since your example had 4 characters for the hex code, those values are treated as strings, so first convert to a number before converting to binary; attempting to go straight there will fail when your source has a mix of alphanumeric characters. Obviously, without the leading zeros, when you concatenate the two values as strings you will lose some positions you need to count. Also, I didn't bother sorting out the count of how many zeros sit right of the last occurring 1, but essentially that's what comes after inserting your leading zeros.
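For the missing leading zeros, one possible follow-on sketch is to prepend eight zeros and keep only the last eight characters (substr with a negative start counts from the end of the string); the byte order in `combined` is an assumption based on the little-endian reading discussed above:

```
| eval bit1padded=substr("00000000".bit1ASbinary, -8)
| eval bit2padded=substr("00000000".bit2ASbinary, -8)
| eval combined=bit2padded.bit1padded
```

With both halves padded to exactly 8 characters, the concatenated 16-character string preserves every bit position for counting.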
Yes, Studio is not a good choice for anything vaguely sophisticated!
When using Dashboard Studio there is currently no option for this.

dataValuesDisplay | ("off" | "all" | "minmax") | off | Specify whether the chart should display no labels, all labels, or only the min and max labels.

https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/DashStudio/chartsBar

I have even tried putting the overlay into the y2 axis, but dataValuesDisplay is a higher-level option, so it impacts all axes.

{
    "type": "splunk.column",
    "dataSources": {
        "primary": "ds_TNxdC2O9"
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false,
    "title": "Column Chart",
    "description": "Overlay Test",
    "options": {
        "y": "> primary | frameBySeriesNames('regular','_span')",
        "y2": "> primary | frameBySeriesNames('overlay','_span')",
        "overlayFields": [
            "overlay"
        ],
        "dataValuesDisplay": "all"
    },
    "context": {}
}
Try Splunk's webhook action in the alert settings. In Teams you can configure the settings as shown here (to create the webhook URL in Teams): https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook?tabs=newteams%2Cdotnet
I updated the search:

<search depends="some_other_token">
  <query>
    | mysearch id in $some_other_token$
    | head 1
    | fields product_id
  </query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <refresh>60</refresh>
  <done>
    <condition match="'job.resultCount' != 0">
      <set token="form.some_token">$result.product_id$</set>
    </condition>
    <condition match="'job.resultCount' == 0">
      <set token="form.some_token">*</set>
    </condition>
  </done>
</search>

The "All" value is an option in the following multiselect:

<input id="select_abc" type="multiselect" token="some_token" searchWhenChanged="true">
  <default>*</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <choice value="*">All</choice>
  <search base="base_search">
    <query>
      | search to fill dropdown options
      | fields label, product_id
    </query>
  </search>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>product_id</fieldForValue>
  <delimiter>,</delimiter>
</input>

So I want to set the value of the above multiselect (some_token) on init and when another dropdown (some_other_token) changes; some_other_token is used in the search above.
It is not clear exactly what you are trying to do. Is the hex code to be treated as if it is least significant byte first (left-most) and most significant byte second, but the bits in each byte are to be treated as most significant bit first (left-most)? Is that right? Do you just want to know the bit position of the least significant set bit?
1. Don't use the _json sourcetype. If needed, create a new one copying settings from _json.
2. Don't use indexed extractions unless you're absolutely sure what you're doing.
3. Don't edit the _json sourcetype - it's a built-in sourcetype which shouldn't be used explicitly anyway (see p.1).
4. The count(field) aggregation counts single values, so if you have multivalued fields it's normal to have count(field) higher than a general event count. In your case you probably (as @gcusello already pointed out) have multiple occurrences of a "timestamp" field within a single event, so it gets parsed as a multivalued field and gets counted accordingly.
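A quick self-contained way to see the multivalue effect from p.4 (makeresults fabricates one event with a two-valued timestamp field; the values themselves are arbitrary):

```
| makeresults
| eval timestamp=split("2024-01-01T00:00:00,2024-01-01T00:00:01", ",")
| stats count count(timestamp)
```

One event in, yet count(timestamp) reports 2, because each value of the multivalued field is counted separately.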
Unfortunately, third-party addons and their manuals are often... how to say it gently... not written in the best way possible. They are written by people who might be proficient with their respective solutions but not necessarily knowledgeable in Splunk.

The advised way to get syslog data into Splunk is still to use an external syslog daemon, which will either write the data to files from which you pick up the events with a UF and a monitor input, or send the data to Splunk's HEC input. For a small-scale test environment, sending directly to Splunk might be relatively OK (when you don't mind the cons of such a setup), but you need to create your udp or tcp inputs on high ports (over 1024) when not running Splunk as root.
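For the direct-input route, a minimal inputs.conf sketch might look like this (the port, index, and sourcetype are placeholders, not values from the thread):

```
# inputs.conf - hypothetical example; high port so Splunk need not run as root
[udp://5514]
sourcetype = syslog
index = network
connection_host = ip
```

connection_host = ip records the sender's IP address as the host field, which is usually what you want for syslog traffic.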
Hi @Iris_Pi, as I said, you probably have two timestamps for each event, so you could use _time (you probably associated one of the two timestamps with this field), or you could take the first one for each event using mvdedup. Ciao. Giuseppe
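As a sketch (the field name timestamp is assumed from the thread): mvdedup collapses the two values when they are identical, while mvindex simply picks the first one regardless:

```
| eval timestamp=mvdedup(timestamp)
| eval first_timestamp=mvindex(timestamp, 0)
```

After either eval, stats counts on the field should line up with the event count again.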
I followed this suggestion, but it doesn't work:

>>> If you have JSON field extraction at index time via INDEXED_EXTRACTIONS = JSON, you need two additional lines to solve this problem:

AUTO_KV_JSON = false
KV_MODE = none
>>>
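For reference, the quoted advice as a complete props.conf stanza would look something like this (the sourcetype name is a placeholder):

```
# props.conf - hypothetical sourcetype following the quoted suggestion
[my_json]
INDEXED_EXTRACTIONS = JSON
AUTO_KV_JSON = false
KV_MODE = none
```

The intent is to keep the index-time JSON extraction while disabling the search-time extractions that would otherwise parse the same fields a second time.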
Hi @Iris_Pi, you probably have more than one timestamp in each event. What if you count the stats using a different field (e.g. _time)? Ciao. Giuseppe
Hello guys, I've hit a weird problem when uploading a JSON file. As you can see in the following screenshots, there are only 17790 events; however, when I tried to count the occurrences of the fields, the number is twice the event count.

[screenshot: example 1]
[screenshot: example 2]

The sourcetype I used is _json. Please share your insight here, thank you in advance!
I had a reply from Splunk Support; it seems that init.d has not been supported for a while now, as mentioned here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/ConfigureSplunktostartatboottime

"The init.d boot-start script is not compatible with RHEL 8 and higher. You can instead configure systemd to manage boot start and run splunkd as a service. For more information, see Enable boot start on machines that run systemd."

In fact, I have this issue on an Oracle Linux 8 machine.
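For anyone hitting the same thing, switching to systemd-managed boot-start would look roughly like this (run as root; the splunk user name is an assumption about your install):

```
# remove any old init.d boot-start config, then let systemd manage splunkd
$SPLUNK_HOME/bin/splunk disable boot-start
$SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunk
```

After this, splunkd is controlled as a systemd unit (systemctl start/stop) rather than via the init.d script.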