All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Dear experts, according to the documentation, after stats only the fields used in the stats command remain.

| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode
| where stoerCode IN ("K02")
| stats count as periodCount by zbpIdentifier
| sort -periodCount
| head 10
| fields zbpIdentifier zbpIdentifier_bp periodCount importZeit_uF

To explain in detail: after table, the following fields are available: importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode. After stats count, only zbpIdentifier and periodCount are left.

Question: how do I change the code above to get the count while keeping all fields available as before? Thank you for your support.
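A minimal sketch of one common approach, assuming the extra fields have a manageable set of values per zbpIdentifier: carry them through stats with values() (field names taken from the question above):

| where stoerCode IN ("K02")
| stats count as periodCount, values(importZeit_uF) as importZeit_uF, values(zbpIdentifier_bp) as zbpIdentifier_bp by zbpIdentifier
| sort -periodCount
| head 10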
When I click on the raw log and back out of it, it shows up as highlighted. How do I default the sourcetype/source to always show as highlighted? I've messed with the props.conf and can't get it. This only started occurring after we migrated from on-prem Splunk to Splunk Cloud. Before, these logs would automatically show up parsed in JSON.
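A minimal sketch, assuming the goal is to get the events parsed as JSON again at search time (the sourcetype name is a placeholder): KV_MODE is a search-time props.conf setting, so in Splunk Cloud it needs to live in an app on the search tier:

[your_json_sourcetype]
KV_MODE = json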
Hi, for the Splunk search head component I copied the configuration files from the old virtual machine to the new physical server. After that I started the Splunk services on the new physical box, but the Web UI is not loading and I get the message below:

Waiting for web server at https://127.0.0.1:8000 to be available................WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details. . Done
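For reference, a minimal sketch of the server.conf stanza the warning points at; the warning itself is usually benign and separate from the UI not loading (enabling validation requires server certificates whose hostnames actually match):

[sslConfig]
cliVerifyServerName = true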
I have an event like this:

~01~20241009-100922;899~19700101-000029;578~ASDF~QWER~YXCV

There are two timestamps in this. I have set up my stanza to extract the second one. But in this particular case, the second one is what I consider "bad". For the record, here is my props.conf:

[QWERTY]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
MAX_TIMESTAMP_LOOKAHEAD = 43
TIME_FORMAT = %Y%m%d-%H%M%S;%3N
TIME_PREFIX = ^\#\d{2}\#.{0,19}\#
MAX_DAYS_AGO = 10951
REPORT-1 = some-report-1
REPORT-2 = some-report-2

The consequence of this seems to be that Splunk indexes the entire file as a single event, which is something I absolutely want to avoid. Also, I do need to use line merging, as the same file may contain XML dumps. So what I need is something that implements the following logic:

if second_timestamp_is_bad:
    extract_first_timestamp()
else:
    extract_second_timestamp()

Any tips / hints on how to mitigate this scenario using only options / functionality provided by Splunk are greatly appreciated.
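props.conf cannot express if/else timestamp logic, but a hedged sketch of one way to at least stop the whole file collapsing into a single event: break on an explicit record header instead of on date validity (the header regex is an assumption based on the sample above; XML dumps stay merged because only record headers match the breaker):

[QWERTY]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=~\d{2}~\d{8}-\d{6})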
Hello everyone! I'm trying to create a dashboard and set some tokens through JavaScript. I have some HTML text inputs and I want them, on the click of a button, to set the corresponding tokens to the inputted value. However, when I try to click the button again, the click event doesn't trigger. Can you help me?

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function (_, $, mvc) {
    function setToken(name, value) {
        mvc.Components.get("default").set(name, value);
        mvc.Components.get('submitted', {create: true}).set(name, value);
    }
    /* ----------------------- */
    let prefix = mvc.Components.get("default").get('personal_input_prefix') ?? "personal_";
    // Setting tokens for Inputs with prefix ${prefix}
    $('#personal_submit').on('click', function(e){
        e.preventDefault();
        console.log("CLICKED");
        let input_text = $("input[type=text]");
        for (let element of input_text) {
            let id = element.id;
            if (id !== undefined && id.startsWith(prefix)){
                let value = element.value;
                setToken(`${id}_token`, value); // <---
                // Set token ${id}_token to value ${value}
                document.getElementById(`${id}_token_id`).innerHTML = value;
            }
        }
    });
});

DASHBOARD EXAMPLE:

<form version="1.1" theme="light" script="test.js">
  <label>Dashboard test</label>
  <row>
    <panel>
      <html>
        <input id="personal_valueA" type="text"/>
        <input id="personal_valueB" type="text"/>
        <button id="personal_submit" class="primary-btn">Click</button>
        <br/>
        Show:
        <p id="personal_valueA_token_id">$personal_valueA_token$</p>
        <p id="personal_valueB_token_id">$personal_valueB_token$</p>
      </html>
    </panel>
  </row>
</form>
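A minimal sketch of one common workaround, assuming the issue is that the <html> panel is re-rendered when its tokens change, which discards handlers bound directly to the button: a delegated handler on document survives re-renders (selector taken from the dashboard above):

$(document).on('click', '#personal_submit', function (e) {
    e.preventDefault();
    // ... same token-setting loop as above ...
});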
When I upgraded the AppDynamics Controller from 24.7.3 to 24.10 on-prem, which garbage collector does it use: CMS or G1GC?
Hello everyone, I have a question for you. In a single-site cluster, how can I configure the license manager to be a separate node (this node will not have any other cluster role besides license manager)? I see that cluster-config does not have a corresponding mode: ==> edit cluster-config -mode manager|peer|searchhead -<parameter_name> <parameter_value>. If it should be the MC, how should I configure it? It would be even better if best practices could be provided.
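A minimal sketch, assuming a standalone license manager: the role is not set through cluster-config at all; the license manager is simply the instance holding the license files, and every other node points at it in server.conf (hostname is a placeholder):

[license]
manager_uri = https://license-manager.example.com:8089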
I have the following regex that I (currently) use at search time (it will be a field extraction once I get it ironed out):

User\[(?:(?<SignOffDomain>[^\\]+)(?:\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)

It seems to work OK on regex101: https://regex101.com/r/nGdKxQ/5 but fails when trying to parse in Splunk with the following error:

Error in 'rex' command: Encountered the following error while compiling the regex 'User\[(?:(?<SignOffDomain>[^\]+)(?:\))?(?<SignOffUsername>[^\]]+)[^\[]+\["(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)': Regex: missing closing parenthesis.

Any clue on what I need to escape additionally, perhaps? For testing I created the following sample:

| makeresults count=2
| streamstats count
| eval _raw=if((count%2) == 1, "2025-01-20 08:43:11 Local0 Info 08:43:11:347 HAL-TRT-SN1701 DOMAIN\firstname0.lastname0|4832|TXA HIPAA [1m]HIPAALogging: User[DOMAIN\firstname0.lastname0], Comment[\"Successfully authenticated user with privilege: A_Dummy_Privilege\"], PatientId[PatientIdX], PlanUID[PlanLabel:PlabnLabelX,PlanInstanceUID:PlanInstanceUIDX", "2025-01-20 07:54:42 Local0 Info 07:54:41:911 HAL-TRT-SN1701 domain\firstanme2.lastname2|4832|TXA HIPAA [1m]HIPAALogging: User[firstname1.lastname1], Comment[\"Successfully authenticated user with privilege: AnotherPrivilege\"], PatientId[], PlanUID[], Right[True]")
| rex field="_raw" "User\[(?:(?<SignOffDomain>[^\\]+)(?:\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)"
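A hedged guess based on the error text: the compiled pattern shows [^\]+ where the search string had [^\\]+, i.e. the search-language string parser consumed one layer of backslashes before PCRE saw the pattern. Doubling the backslashes that must reach the regex engine as \\ may help, e.g. (a sketch, not verified against the sample):

| rex field=_raw "User\[(?:(?<SignOffDomain>[^\\\\]+)\\\\)?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^:]+):\s+(?<SignOffPrivilege>[^\"]+)"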
Can we use the Splunk Add-on for AWS for free, or does it require a license to be used with a Splunk Enterprise free trial?
My friend and I have the same indexes.conf, but why are the created bucket sizes different? Mine are around 1 MB, but my friend's are created in units of 5.x MB.

indexes.conf:

[volume:hot]
path = /data/HOT
maxVolumeDataSizeMB = 100

[volume:cold]
path = /data/COLD
maxVolumeDataSizeMB = 100

[lotte]
homePath = volume:hot/lotte/db
coldPath = volume:cold/lotte/colddb
maxDataSize = 1
maxTotalDataSizeMB = 200
thawedPath = $SPLUNK_DB/lotte/thaweddb
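A minimal sketch for comparing the actual bucket sizes on both instances (index name taken from the config above); dbinspect reports per-bucket state and on-disk size:

| dbinspect index=lotte
| table bucketId state startEpoch endEpoch sizeOnDiskMB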
Hi, in my Splunk dashboard there are a few drop-down inputs and a submit button to submit the tokens for the search query, but I would like a popup box to confirm or cancel when clicking the Submit button. Is this possible? Please, can someone help?
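A minimal sketch of the idea as a Classic (Simple XML) dashboard JS extension, assuming the stock submit button (the selector and the handler ordering are assumptions; depending on registration order Splunk's own handler may fire first, so this may need adjusting):

require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function ($) {
    // Ask for confirmation before the tokens are submitted
    $(document).on('click', '.submit-button button', function (e) {
        if (!confirm('Submit the selected values?')) {
            e.preventDefault();
            e.stopImmediatePropagation();
        }
    });
});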
We use the map command to query data; both the July and the March data can be queried separately and return results. However, selecting the time range March to July consistently displays only the March data and loses the July results. The impact is significant now, and we hope you can help us check this, or tell us whether we can implement it in a different way. My SPL is as follows:

index=edws sourcetype=edwcsv status="是"
| stats earliest(_time) as earliest_time latest(_time) as latest_time
| eval earliest_time=strftime(earliest_time, "%F 00:00:00")
| eval latest_time=strftime(latest_time, "%F 00:00:00")
| eval earliest_time=strptime(earliest_time, "%F %T")
| eval earliest_time=round(earliest_time)
| eval latest_time=strptime(latest_time, "%F %T")
| eval latest_time=round(latest_time)
| addinfo
| table info_min_time info_max_time earliest_time latest_time
| eval searchEarliestTime=if(info_min_time == "0.000", earliest_time, info_min_time)
| eval searchLatestTime=if(info_max_time="+Infinity", relative_time(latest_time,"+1d"), info_max_time)
| eval start=mvrange(searchEarliestTime, searchLatestTime, "1d")
| mvexpand start
| eval end=relative_time(start,"+7d")
| eval alert_date=relative_time(end,"+1d")
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| eval c=strftime(alert_date, "%F")
| fields start a end b c
| map search="search earliest=\"$start$\" latest=\"$end$\" index=edws sourcetype=edwcsv status="是" | bin _time span=1d | stats dc(_time) as "访问敏感账户次数" by date day name department number | eval a=$a$ | eval b=$b$ | eval c=$c$ | stats sum(访问敏感账户次数) as count,values(day) as "查询日期" by a b c name number department " maxsearches=500000
| where count > 2
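Two hedged things to check, as sketches rather than confirmed fixes. First, inside map's search="..." string, embedded double quotes normally need backslash-escaping or the string terminates early; earliest/latest are escaped above, but the inner status="是" and the quoted field names are not, e.g.:

| map maxsearches=500000 search="search earliest=\"$start$\" latest=\"$end$\" index=edws sourcetype=edwcsv status=\"是\" | ..."

Second, mvexpand is subject to a memory limit (max_mem_usage_mb in limits.conf) and silently truncates its output when the limit is hit, which would drop the later (July) rows; the job's search log shows a truncation warning if that is happening.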
Hello, may I ask two questions? 1) We have configured a 200-day archive for the index, but it has not taken effect. Could you please advise on the triggering conditions for the frozenTimePeriodInSecs parameter? 2) Which has higher priority, the index's frozenTimePeriodInSecs parameter or maxTotalDataSizeMB?
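For reference, a minimal indexes.conf sketch for 200-day retention (200 days x 86,400 s/day = 17,280,000 s). A bucket only freezes once its newest event is older than the limit, so a bucket spanning old and new data waits for its newest event; and frozenTimePeriodInSecs and maxTotalDataSizeMB are enforced independently, with whichever threshold is crossed first freezing the oldest buckets (the index name is a placeholder; 500000 is the shipped default for maxTotalDataSizeMB):

[your_index]
frozenTimePeriodInSecs = 17280000
maxTotalDataSizeMB = 500000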
Hey, lately I was working on an SPL search and wondered why this isn't working. This is simplified:

index IN(anonymized_index_1, anonymized_index_2, anonymized_index_3, anonymized_index_4) NOT index IN (excluded_index_1) earliest=-1h@h latest=@h EventCode=xxxx sourcetype="AnonymizedSourceType" NewProcessName IN (*)
    [| tstats count where index IN(anonymized_index_3, anonymized_index_1, anonymized_index_4, anonymized_index_2) NOT index IN (excluded_index_1) earliest=-1h@h latest=@h idx_EventCode=xxxx sourcetype="AnonymizedSourceType" idx_NewProcessName IN(*) by idx_Field1 _time idx_Field2 host index span=1s
    | search anonym_ref!="n/a" OR (idx_NewProcessName IN (*placeholder_1*, *placeholder_2*) AND (placeholder_field_1=* OR placeholder_field_2=*)) ]

When I run this SPL, I've noticed inconsistent behavior regarding the earliest and latest values. Sometimes the search respects the defined earliest and latest values, but at other times it completely ignores them and instead uses the time range from the UI time picker. After experimenting, I observed that if I modify the search command to combine the conditions into one single condition instead of having two separate conditions, it seems to work as expected. However, I find this behavior quite strange and inconsistent. I would like to retain the current structure of the search command (with two conditions) but ensure it always respects the defined earliest and latest values. If anyone can identify why this issue occurs or provide suggestions to resolve it while maintaining the current structure, I'd greatly appreciate your input.
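A hedged sketch of the workaround the post describes (combining the subsearch's filters into one condition), with the whole filter wrapped in a single parenthesised expression so the optimizer cannot split it (structure assumed from the simplified SPL above):

| search (anonym_ref!="n/a" OR (idx_NewProcessName IN (*placeholder_1*, *placeholder_2*) AND (placeholder_field_1=* OR placeholder_field_2=*)))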
Hi Team,

Version: Splunk Enterprise v9.2.1

We are trying to capture user-generated data, so we have created forms with a Classic Dashboard utilising HTML, CSS and JS. Our current approach to capturing data is outputting everything to a CSV file and then importing it back into Splunk. Short term, and with little data, this isn't a drama and we can display the data how we want to, but I can see the long-term issues (unable to update without outputting the whole file again), so we are looking for different ways to capture this. One option is KV Stores, where we can update the specific information that needs changing, but we are also looking at HEC and ingesting the data directly into Splunk.

I am not a front-end expert, so I have encountered an issue I'm not sure how to get by. We can use curl after allowing the port through our firewall and that returns success, even though Splunk does not ingest, but I want to do this directly via JS. My dashboard is built using HTML and has a <button>; my JS has an EventListener("click", function) which works, as we have been using alerts and console.logs while fault finding. It seems to be failing at the fetch:

const data = {
    event: "myEvent",
    index: "myIndex",
    details: { myDetails }
};

fetch("https://myServer:8088/services/collector/event", {
    method: "POST",
    headers: {
        "Authorization": "Splunk myToken",
    },
    body: JSON.stringify(data)
})

But we receive the following error:

Uncaught (in promise) TypeError: Failed to fetch at HTMLButtonElement.submit (eval at _runscript (dashboard)), <anonymous>)

Every online search says to check the URL (which is correct) or the token (which is correct). With the curl not ingesting and the above error, would anyone have any other suggestions as to what the cause might be?

p.s. While we are still maturing with Splunk, this dashboard and the JS are being run from a Search Head.

Regards, Ben
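A hedged pointer, assuming the browser itself is blocking the request: a TypeError: Failed to fetch from fetch() with a valid URL and token is commonly a CORS rejection, which curl never hits (curl performs no CORS checks), and that would explain the difference. HEC has its own CORS setting in inputs.conf on the instance hosting HEC (the origin below is a placeholder for your Splunk Web origin):

[http]
crossOriginSharingPolicy = https://myServer:8000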
Hello, I’m trying to tune the Machine Learning Toolkit in order to detect authentication abuse on a web portal (based upon LemonLDAP::NG). My logs look like this:

(time/host/... header) client=(IP address) user=(login) sessionID=(session-id) mail=(user email address) action=(various statuses: connected / non-existent user / wrong pwd…)

I would like to train the Machine Learning Toolkit so that I can detect anomalies. Those anomalies can be:
- a client that has made auth attempts for an unusual number of logins
- a client that has made auth attempts for both non-existing and existing users
- …

So far it fails hard. I’ve trained a model like this on approx. a month of data:

index="webauth" ( TERM(was) TERM(not) TERM(found) TERM(in) TERM(LDAP) ) OR TERM(connected) OR TERM(credentials) linecount=1
| rex "action=(?<act>.*)"
| eval action=case(match(act,".* connected"), "connected", match(act,".* was not found in LDAP directory.*"), "unknown", match(act, ".* credentials"),"wrongpassword")
| bin span=1h _time
| eventstats dc(user) AS dcUsers, count(user) AS countUsers BY client,_time,action
| search dcUsers>1
| stats values(dcUsers) AS DCU, values(countUsers) AS CU BY client,_time,action
| eval HourOfDay=strftime(_time,"%H")
| fit DensityFunction CU by "client,DCU" as outlier into app:TEST

Then I’ve tested the model on another time interval where I know there is a big anomaly, by replacing the fit directive with "apply (model-name) threshold=(various values)". No results.

So I guess I’m not on the right track to achieve this. Any help appreciated!
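A hedged debugging sketch, assuming the model name from the search above: DensityFunction fits one density per distinct value of the by-clause group, so a (client, DCU) combination never seen during fit gets no density (and therefore no outlier flag) at apply time, and groups with very few samples fit poorly; show_density can make this visible:

... | apply app:TEST show_density=true | table client DCU CU "ProbabilityDensity(CU)"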
I am using SplunkJS to display an HTML page with JavaScript. I have tried everything to get the SearchManager query to use a JavaScript variable (e.g. using splQuery, +splQuery+, etc.). If I enter the Splunk query in quotes instead of the variable, it does work.

var splQuery = "| makeresults";
var SearchManager = require("splunkjs/mvc/searchmanager");
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: "false",
    search: splQuery
});
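A minimal sketch of the same setup with two hedged adjustments: autostart: "false" is the string "false" (truthy in JavaScript) rather than the boolean, and with autostart disabled the search must be started explicitly:

var splQuery = "| makeresults";
var SearchManager = require("splunkjs/mvc/searchmanager");
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: false,   // boolean, not the string "false"
    search: splQuery
});
mysearch.startSearch(); // kick off the search explicitly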
Hello everyone! I am experimenting with the SC4S transforms that are posted here: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/

My problem is that I am trying to reformat fields, and in one particular place I would need to ensure that a space precedes the _h= part in the transform stanza below.

[md_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
FORMAT = _h=$1 $0
DEST_KEY = _raw

However, if I add multiple whitespaces in the FORMAT string, right after the equals sign in the above example, they are ignored. Should I put the whole thing between quotes? Wouldn't the quotes be included in the _raw string? What would be the right solution for this?
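A hedged alternative, assuming Splunk 8.x or later: conf values are whitespace-trimmed at their edges, so a leading space in FORMAT is lost, but an ingest-time eval builds _raw from a quoted expression in which the space survives (the stanza name is a placeholder; host is the metadata field):

[md_host_eval]
INGEST_EVAL = _raw=" _h=".host." "._raw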
I'm calling the API from BTP IS and want to get the result of an alert that I created before. My alert name is PRD - Daily CCS Integrations Error Report; I'm not quite sure what the correct syntax of the URL and command is to get the result.
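A hedged sketch using the saved-search REST endpoints (host and credentials are placeholders; the alert name must be URL-encoded, spaces as %20): first list the alert's recent dispatch jobs, then fetch the results of one job by its SID:

curl -k -u user:pass "https://splunk-host:8089/services/saved/searches/PRD%20-%20Daily%20CCS%20Integrations%20Error%20Report/history"

curl -k -u user:pass "https://splunk-host:8089/services/search/jobs/<sid>/results?output_mode=json"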
Splunk Transmit Security, Version: 1.2.8, Build: 1. The inputs UI page is showing an error; any suggestions on this? Splunk Enterprise DCN server Version: 8.1.1.

ERROR: This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page.