Hello everyone! I'm trying to create a dashboard and set some tokens through JavaScript. I have some HTML text inputs, and I want them to set the corresponding tokens to the entered values when a button is clicked. However, when I click the button a second time, the click event no longer fires. Can you help me?

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function (_, $, mvc) {
    function setToken(name, value) {
        mvc.Components.get("default").set(name, value);
        mvc.Components.get('submitted', {create: true}).set(name, value);
    }

    /* ----------------------- */

    let prefix = mvc.Components.get("default").get('personal_input_prefix') ?? "personal_";

    // Setting tokens for inputs with prefix ${prefix}
    $('#personal_submit').on('click', function (e) {
        e.preventDefault();
        console.log("CLICKED");
        let input_text = $("input[type=text]");
        for (let element of input_text) {
            let id = element.id;
            if (id !== undefined && id.startsWith(prefix)) {
                let value = element.value;
                setToken(`${id}_token`, value); // <---
                document.getElementById(`${id}_token_id`).innerHTML = value;
                // Set token ${id}_token to value ${value}
            }
        }
    });
});

DASHBOARD EXAMPLE:

<form version="1.1" theme="light" script="test.js">
  <label>Dashboard test</label>
  <row>
    <panel>
      <html>
        <input id="personal_valueA" type="text"/>
        <input id="personal_valueB" type="text"/>
        <button id="personal_submit" class="primary-btn">Click</button>
        <br/>
        Show:
        <p id="personal_valueA_token_id">$personal_valueA_token$</p>
        <p id="personal_valueB_token_id">$personal_valueB_token$</p>
      </html>
    </panel>
  </row>
</form>
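One likely culprit, offered as an assumption since the panel markup references the tokens: when a token used inside the <html> panel changes, Simple XML re-renders the panel and replaces the original button element, so a handler bound directly to that element is lost after the first click. A minimal sketch of delegated event binding, which survives such re-renders because the handler lives on document and matches the re-created button by selector:

// Sketch: delegate the click so re-rendering the panel does not detach it
$(document).on('click', '#personal_submit', function (e) {
    e.preventDefault();
    console.log("CLICKED");
    // ...same token-setting loop as in the original handler...
});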
After upgrading an on-prem AppDynamics Controller from 24.7.3 to 24.10, which garbage collector does it use: CMS or G1GC?
Hello everyone, I have a question for you. In a single-site cluster, how can I configure the license manager to be a separate node (this node will have no cluster role other than license manager)? I see that cluster-config does not have a corresponding mode:
==> edit cluster-config -mode manager|peer|searchhead -<parameter_name> <parameter_value>
If it should instead be the MC (Monitoring Console) node, how should I configure it? It would be even better if best practices could be provided.
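For reference, a minimal sketch under the assumption that a dedicated license manager host (the name lm.example.com below is a placeholder) is desired: the license manager is not a cluster role, so it is not set through cluster-config at all. Instead, every other node is pointed at it in server.conf (older releases use master_uri instead of manager_uri):

# server.conf on every node EXCEPT the license manager itself
[license]
manager_uri = https://lm.example.com:8089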
I have the following regex that I (currently) use at search time (it will become a field extraction once I get it ironed out):

User\[(?:(?<SignOffDomain>[^\\]+)(?:\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)

It seems to work OK on regex101: https://regex101.com/r/nGdKxQ/5 but fails when parsing in Splunk with the following error:

Error in 'rex' command: Encountered the following error while compiling the regex 'User\[(?:(?<SignOffDomain>[^\]+)(?:\))?(?<SignOffUsername>[^\]]+)[^\[]+\["(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)': Regex: missing closing parenthesis.

Any clue what I need to escape additionally? For testing I created the following sample:

| makeresults count=2
| streamstats count
| eval _raw=if((count%2) == 1, "2025-01-20 08:43:11 Local0 Info 08:43:11:347 HAL-TRT-SN1701 DOMAIN\firstname0.lastname0|4832|TXA HIPAA [1m]HIPAALogging: User[DOMAIN\firstname0.lastname0], Comment[\"Successfully authenticated user with privilege: A_Dummy_Privilege\"], PatientId[PatientIdX], PlanUID[PlanLabel:PlabnLabelX,PlanInstanceUID:PlanInstanceUIDX", "2025-01-20 07:54:42 Local0 Info 07:54:41:911 HAL-TRT-SN1701 domain\firstanme2.lastname2|4832|TXA HIPAA [1m]HIPAALogging: User[firstname1.lastname1], Comment[\"Successfully authenticated user with privilege: AnotherPrivilege\"], PatientId[], PlanUID[], Right[True]")
| rex field="_raw" "User\[(?:(?<SignOffDomain>[^\\]+)(?:\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)"
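The error message itself hints at the likely cause: inside a double-quoted SPL string, the search parser consumes one level of backslash escaping before the pattern reaches the regex engine, so \\ arrives as a single \ and the character class loses its closing bracket. A sketch of the same rex with only the backslash escapes doubled once more (four backslashes in the string become a literal-backslash match in the regex):

| rex field=_raw "User\[(?:(?<SignOffDomain>[^\\\\]+)(?:\\\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)"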
Can we use the Splunk Add-on for AWS for free, or does it require a license when used with a Splunk Enterprise free trial?
My friend and I have the same indexes.conf, so why are our buckets being created with different sizes? Mine are around 1 MB, but my friend's are created in 5.x MB units.

indexes.conf:

[volume:hot]
path = /data/HOT
maxVolumeDataSizeMB = 100

[volume:cold]
path = /data/COLD
maxVolumeDataSizeMB = 100

[lotte]
homePath = volume:hot/lotte/db
coldPath = volume:cold/lotte/colddb
maxDataSize = 1
maxTotalDataSizeMB = 200
thawedPath = $SPLUNK_DB/lotte/thaweddb
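One way to rule out a precedence difference, offered as a sketch assuming a standard install: compare the effective merged settings on both machines rather than a single indexes.conf copy, since an indexes.conf in another app can override maxDataSize on one of them.

$SPLUNK_HOME/bin/splunk btool indexes list lotte --debug

The --debug flag prints which file each effective setting comes from, so any override is immediately visible.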
Hi, my Splunk dashboard has a few drop-down inputs and a Submit button that submits the tokens for the search query, but I would like a popup box to reconfirm or cancel when the Submit button is clicked. Is this possible? Can someone please help?
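A minimal sketch of one common approach, assuming a Simple XML dashboard with a JS file attached via the form's script attribute; the .splunk-submit-button selector below is an assumption about the rendered markup and may need adjusting for your version:

require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function ($) {
    // Intercept the click before the default submit handling runs
    $(document).on('click', '.splunk-submit-button button', function (e) {
        // window.confirm shows a native OK/Cancel dialog
        if (!window.confirm('Submit these values?')) {
            e.preventDefault();            // Cancel: stop the submit
            e.stopImmediatePropagation();  // and block the dashboard's own handler
        }
    });
});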
We use the map function to query data; both the July data and the March data return results when queried separately. However, selecting a time range of March to July consistently displays only the March data and loses the July results. The impact is significant, so we hope you can help us investigate, or suggest a different way to implement this. The SPL I use is as follows:

index=edws sourcetype=edwcsv status="是"
| stats earliest(_time) as earliest_time latest(_time) as latest_time
| eval earliest_time=strftime(earliest_time, "%F 00:00:00")
| eval latest_time=strftime(latest_time, "%F 00:00:00")
| eval earliest_time=strptime(earliest_time, "%F %T")
| eval earliest_time=round(earliest_time)
| eval latest_time=strptime(latest_time, "%F %T")
| eval latest_time=round(latest_time)
| addinfo
| table info_min_time info_max_time earliest_time latest_time
| eval searchEarliestTime=if(info_min_time == "0.000", earliest_time, info_min_time)
| eval searchLatestTime=if(info_max_time="+Infinity", relative_time(latest_time,"+1d"), info_max_time)
| eval start=mvrange(searchEarliestTime, searchLatestTime, "1d")
| mvexpand start
| eval end=relative_time(start,"+7d")
| eval alert_date=relative_time(end,"+1d")
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| eval c=strftime(alert_date, "%F")
| fields start a end b c
| map search="search earliest=\"$start$\" latest=\"$end$\" index=edws sourcetype=edwcsv status="是" | bin _time span=1d | stats dc(_time) as "访问敏感账户次数" by date day name department number | eval a=$a$ | eval b=$b$ | eval c=$c$ | stats sum(访问敏感账户次数) as count, values(day) as "查询日期" by a b c name number department " maxsearches=500000
| where count > 2
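One thing that stands out, offered as an observation rather than a confirmed diagnosis: inside the map search string, every embedded quote must be escaped, yet status="是" and the quoted stats field names use bare quotes, which can terminate the string early and silently truncate the inner search. A sketch of the same map call with all inner quotes escaped (and the non-ASCII field reference quoted):

| map maxsearches=500000 search="search earliest=\"$start$\" latest=\"$end$\" index=edws sourcetype=edwcsv status=\"是\" | bin _time span=1d | stats dc(_time) as \"访问敏感账户次数\" by date day name department number | eval a=$a$ | eval b=$b$ | eval c=$c$ | stats sum(\"访问敏感账户次数\") as count, values(day) as \"查询日期\" by a b c name number department"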
Hello, may I ask two questions?
1) We have configured a 200-day archive (freeze) policy for an index, but it has not taken effect. Could you please advise on the triggering conditions for the frozenTimePeriodInSecs parameter?
2) Which has higher priority: the index's frozenTimePeriodInSecs parameter or maxTotalDataSizeMB?
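For reference, a minimal indexes.conf sketch of a 200-day freeze policy (200 × 86,400 = 17,280,000 seconds; the index name is a placeholder). Note that a bucket is frozen only when its newest event is older than this period, and that the two parameters are not prioritized against each other: whichever limit is reached first triggers freezing of the oldest bucket.

[my_index]
# hypothetical index name; freeze buckets whose newest event is older than 200 days
frozenTimePeriodInSecs = 17280000
maxTotalDataSizeMB = 500000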
Hey, lately I was working on an SPL query and wondered why this isn't working. This is a simplified version:

index IN (anonymized_index_1, anonymized_index_2, anonymized_index_3, anonymized_index_4) NOT index IN (excluded_index_1) earliest=-1h@h latest=@h EventCode=xxxx sourcetype="AnonymizedSourceType" NewProcessName IN (*)
    [| tstats count where index IN (anonymized_index_3, anonymized_index_1, anonymized_index_4, anonymized_index_2) NOT index IN (excluded_index_1) earliest=-1h@h latest=@h idx_EventCode=xxxx sourcetype="AnonymizedSourceType" idx_NewProcessName IN (*) by idx_Field1 _time idx_Field2 host index span=1s
    | search anonym_ref!="n/a" OR (idx_NewProcessName IN (*placeholder_1*, *placeholder_2*) AND (placeholder_field_1=* OR placeholder_field_2=*)) ]

When I run this SPL, I've noticed inconsistent behavior regarding the earliest and latest values. Sometimes the search respects the defined earliest and latest values, but at other times it completely ignores them and instead uses the time range from the UI time picker. After experimenting, I observed that if I modify the search command to combine the conditions into one single condition instead of having two separate conditions, it seems to work as expected. However, I find this behavior quite strange and inconsistent. I would like to retain the current structure of the search command (with two conditions) but ensure it always respects the defined earliest and latest values. If anyone can identify why this issue occurs or suggest how to resolve it while maintaining the current structure, I'd greatly appreciate your input.
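For reference, one reading of the "single condition" workaround described above, shown as a sketch with the field and placeholder names unchanged: wrap the whole filter in one outer pair of parentheses so the OR is evaluated as a single predicate inside the subsearch:

| search (anonym_ref!="n/a" OR (idx_NewProcessName IN (*placeholder_1*, *placeholder_2*) AND (placeholder_field_1=* OR placeholder_field_2=*)))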
Hi Team,

Version: Splunk Enterprise v9.2.1

We are trying to capture user-generated data, so we have created forms with a Classic Dashboard utilising HTML, CSS, and JS. Our current approach to capturing data is outputting everything to a CSV file and then importing it back into Splunk. Short term and with little data this isn't a drama, and we can display the data how we want, but I can see the long-term issues (unable to update without outputting the whole file again), so we are looking at different ways to capture this. One option is KV Stores, where we can update the specific information that needs changing, but we are also looking at HEC and ingesting the data directly into Splunk.

I am not a front-end expert, so I have encountered an issue I'm not sure how to get past. We can use curl after allowing the port through our firewall, and that returns success even though Splunk does not ingest, but I want to do this directly via JS. My dashboard is built using HTML and has a <button>; my JS has an EventListener("click", function) which works, as we have been using alerts and console.logs while fault-finding. It seems to be failing at the fetch:

const data = {
    event: "myEvent",
    index: "myIndex",
    details: { myDetails }
};

fetch("https://myServer:8088/services/collection/event", {
    method: "POST",
    headers: {
        "Authorization": "Splunk myToken",
    },
    body: JSON.stringify(data)
})

But we receive the following error:

Uncaught (in promise) TypeError: Failed to fetch at HTMLButtonElement.submit (eval at _runscript (dashboard)), <anonymous>)

Every online search says to check the URL (which is correct) or the token (which is correct). With the curl not ingesting and the above error, would anyone have any other suggestions as to what the cause might be?

p.s. While we are still maturing with Splunk, this dashboard and the JS are being run from a Search Head.

Regards, Ben
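Two things worth checking, offered as observations rather than a confirmed diagnosis: the snippet posts to /services/collection/event, whereas the HEC endpoint is /services/collector/event; and a browser "Failed to fetch" usually means the request was blocked before any response arrived, typically by CORS or an untrusted TLS certificate, neither of which affects curl. A sketch of the corrected call (host, index, and token values are placeholders from the post):

// Corrected HEC endpoint: /services/collector/event, not /services/collection/event
fetch("https://myServer:8088/services/collector/event", {
    method: "POST",
    headers: {
        "Authorization": "Splunk myToken"
    },
    // HEC expects metadata fields (index, sourcetype, ...) alongside "event"
    body: JSON.stringify({ index: "myIndex", event: { details: "myDetails" } })
})
    .then(r => r.json())
    .then(resp => console.log("HEC response:", resp))
    .catch(err => console.error("HEC request failed:", err));

For browser calls, cross-origin requests to HEC may also need to be allowed on the HEC instance (the origin below is a placeholder for your search head's URL):

# inputs.conf on the HEC instance
[http]
crossOriginSharingPolicy = https://mySearchHead:8000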
Hello, I'm trying to tune the Machine Learning Toolkit to detect authentication abuse on a web portal (based on LemonLDAP::NG). My logs look like this:

(time/host/... header) client=(IP address) user=(login) sessionID=(session-id) mail=(user email address) action=(various statuses: connected / non-existent user / wrong pwd...)

I would like to train the Machine Learning Toolkit so that I can detect anomalies. Those anomalies can be:
- a client that has made auth attempts for an unusual number of logins
- a client that has made auth attempts for both non-existing and existing users
- ...

So far it fails hard. I've trained a model like this on approximately a month of data:

index="webauth" ( TERM(was) TERM(not) TERM(found) TERM(in) TERM(LDAP) ) OR TERM(connected) OR TERM(credentials) linecount=1
| rex "action=(?<act>.*)"
| eval action=case(match(act,".* connected"), "connected", match(act,".* was not found in LDAP directory.*"), "unknown", match(act, ".* credentials"), "wrongpassword")
| bin span=1h _time
| eventstats dc(user) AS dcUsers, count(user) AS countUsers BY client,_time,action
| search dcUsers>1
| stats values(dcUsers) AS DCU, values(countUsers) AS CU BY client,_time,action
| eval HourOfDay=strftime(_time,"%H")
| fit DensityFunction CU by "client,DCU" as outlier into app:TEST

Then I've tested the model on another time interval where I know there is a big anomaly, replacing the fit directive with "apply (model-name) threshold=(various values)". No result.

So I guess I'm not on the right track to achieve this. Any help appreciated!
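One possible explanation, offered as an assumption worth testing: fitting DensityFunction by "client,DCU" builds a separate distribution per (client, DCU) pair, and at apply time any pair not seen during training has no density function, so no outlier can be flagged. A sketch that groups by a low-cardinality field such as HourOfDay instead, so the model generalizes to clients unseen in training (model name and threshold as in the post; exact apply syntax may vary by MLTK version):

... | fit DensityFunction CU by "HourOfDay" as outlier into app:TEST

... | apply app:TEST threshold=0.005
    | where outlier=1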
I am using SplunkJS to display an HTML page with JavaScript. I have tried everything to get the SearchManager query to use a JavaScript variable (e.g. using splQuery, +splQuery+, etc.). If I enter the Splunk query in quotes instead of the variable, it does work.

var splQuery = "| makeresults";
var SearchManager = require("splunkjs/mvc/searchmanager");
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: "false",
    search: splQuery
});
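Two details that may matter here, offered as assumptions based on the snippet rather than a confirmed diagnosis: autostart: "false" is a non-empty string and therefore truthy in JavaScript, and starting the search explicitly makes it easier to see whether the variable was picked up. A minimal sketch:

var SearchManager = require("splunkjs/mvc/searchmanager");

var splQuery = "| makeresults";
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: false,   // boolean false, not the string "false"
    search: splQuery
});
mysearch.startSearch();  // start explicitly once everything is wired up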
Hello everyone! I am experimenting with the SC4S transforms that are posted here: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/

My problem is that I am trying to reformat fields, and in one particular place I need to ensure that a space precedes the _h= part in the transform stanza below.

[md_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
FORMAT = _h=$1 $0
DEST_KEY = _raw

However, if I add whitespace right after the equals sign in the FORMAT value, it is ignored. Should I put the whole thing between quotes? Wouldn't the quotes then be included in the _raw string? What would be the right solution for this?
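For context: .conf parsing strips leading whitespace from attribute values, and to my knowledge transforms.conf does not treat quotes as delimiters, so quoting would indeed put literal quotes into _raw. One workaround sketch, under the assumption that the goal is simply a separator before _h=: restructure FORMAT so the space is interior to the value rather than leading, since interior whitespace is preserved. Whether the reordering below is acceptable depends on what consumes _h= downstream:

[md_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
# interior whitespace survives .conf parsing, so emitting $0 first
# keeps the space before _h= intact
FORMAT = $0 _h=$1
DEST_KEY = _raw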
I'm calling the API from BTP IS and want to get the result of an alert that I created earlier. My alert name is "PRD - Daily CCS Integrations Error Report". I'm not quite sure of the correct syntax of the URL and command to get the result.
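A sketch of one common pattern (host and credentials are placeholders, and the saved-search name must be URL-encoded): first list the alert's recent jobs via the saved search's history endpoint, then fetch the results of one of the returned sids:

# List recent jobs (sids) for the saved search behind the alert
curl -k -u admin:changeme \
  "https://splunk.example.com:8089/services/saved/searches/PRD%20-%20Daily%20CCS%20Integrations%20Error%20Report/history"

# Then fetch the results for one of the returned sids
curl -k -u admin:changeme \
  "https://splunk.example.com:8089/services/search/jobs/<sid>/results?output_mode=json"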
Splunk Transmit Security app, Version 1.2.8, Build 1: the Inputs UI page is showing an error. Any suggestions on this?

Splunk Enterprise DCN server version: 8.1.1

ERROR: "This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page."
At .conf24, we shared that we were in the process of integrating Cisco Talos threat intelligence into Splunk Enterprise Security, Splunk SOAR, and Splunk Attack Analyzer. We know just how eager the community has been to see these integrations come to fruition, so we're thrilled to share that all of the integrations are live! Now, Splunk Security (cloud) customers can directly leverage Cisco Talos' invaluable threat intelligence through Cisco Talos Intelligence for Enterprise Security, the Cisco Talos Intelligence connector for Splunk SOAR, and as a globally enabled feature in Splunk Attack Analyzer, at no additional cost.

To learn more, read our blog "Harness the Power of Cisco Talos Threat Intelligence Across Splunk Security Products" and then check out the following:

- Cisco Talos Intelligence for Enterprise Security: Current Splunk Enterprise Security (cloud) customers can download the Cisco Talos Intelligence for Enterprise Security app from Splunkbase here and find additional guidance on leveraging the app's capabilities here.
- Cisco Talos Intelligence connector for Splunk SOAR: The Cisco Talos Intelligence connector for Splunk SOAR is now pre-installed for all current Splunk SOAR (cloud) customers. Additional guidance on leveraging the connector's capabilities is available here.
- Cisco Talos Intelligence in Splunk Attack Analyzer: These capabilities are globally enabled for all Splunk Attack Analyzer customers and don't require any extra apps, connectors, or configuration. Check out this blog for additional details.
It's not clear to me how indexAndForward works. The documentation says: "Set to 'true' to index all data locally, in addition to forwarding it." Does that mean the data is indexed in two places? If so, what should we do to produce cooked data AND forward it to the indexer?
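For reference, a minimal sketch of the relevant outputs.conf stanzas on a heavy forwarder (the indexer address is a placeholder). With index = true the instance keeps a locally indexed copy in addition to what it forwards, so yes, the data ends up indexed in both places; note that a heavy forwarder already cooks (parses) data before forwarding even when local indexing is off, so indexAndForward is not required just to send cooked data:

[indexAndForward]
# keep a locally indexed copy as well as forwarding (doubles storage)
index = true

[tcpout:my_indexers]
# placeholder indexer address
server = indexer.example.com:9997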
We have a case where we can search and find events that match the search criteria. The client would like to see the events that are prior in time to the one that we matched via the SPL. Can we do that?
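A sketch of one common pattern for this (index name and criteria are placeholders): use a subsearch to grab the matched event's timestamp and return it as the outer search's latest bound, so the outer search returns only events before it:

index=my_index
    [ search index=my_index <your criteria>
      | head 1
      | eval latest=_time
      | return latest ]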
Is there a REST API available for notable event suppression, i.e. to get the suppression details and modify them via REST?
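To the best of my knowledge (worth verifying against your Enterprise Security version), notable event suppressions are stored as eventtypes named notable_suppression-<name>, so they can be read and edited through the standard saved/eventtypes REST endpoint. A sketch with placeholder host, credentials, and suppression name:

# Read a suppression (eventtype) definition
curl -k -u admin:changeme \
  "https://splunk.example.com:8089/services/saved/eventtypes/notable_suppression-my_suppression?output_mode=json"

# Modify its search filter (the search value here is illustrative)
curl -k -u admin:changeme \
  -d 'search=`get_notable_index` source="my_correlation" earliest=-30d' \
  "https://splunk.example.com:8089/services/saved/eventtypes/notable_suppression-my_suppression"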