All Posts


@Cccvvveee0235  Great. 
Hello sir! Thank you! I solved this; it was Bitdefender antivirus blocking Splunk when I was trying to authenticate.
Hello Team, We are seeing some weirdness when we send logs to Splunk Enterprise on-prem. Previously we used the Splunk OTEL Java agent v1 and things were fine. Once the migration to the Splunk OTEL Java agent v2 was done, we started seeing logs being duplicated like below. The below is what started showing up. Can you please help us stop the kubernetes source? The actual source which we used to observe is as below. Please let me know if you need any more information. I would really appreciate any insights into this and into stopping the logs from the kubernetes source. Thanks!
Hello, may I ask two questions? 1) We have configured a 200-day archive for the index, but it has not taken effect. Could you please advise on the triggering conditions for the frozenTimePeriodInSecs parameter? 2) Which has higher priority: the index's frozenTimePeriodInSecs parameter or maxTotalDataSizeMB?
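For reference, here is a minimal indexes.conf sketch of the two settings in question (the index name and values are placeholders, not taken from the post). As far as I know, there is no priority between them: a bucket is frozen as soon as either condition is met, and freezing happens per bucket, so data is only archived once the whole bucket has aged out or the index has outgrown its size cap.

    [my_index]
    # Freeze buckets whose newest event is older than 200 days
    # (200 days * 86400 seconds/day = 17280000 seconds).
    frozenTimePeriodInSecs = 17280000
    # Independently, freeze the oldest buckets once the whole index
    # exceeds this size.
    maxTotalDataSizeMB = 500000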
You know when you're just on autopilot when you type things out, you know that what you've typed is wrong but you still type it anyway..? Yeah... corrected that now, but still the same issue. Thank you for pointing that out though, PickleRick; that would've caused another issue down the line.
OK. Firstly, this is something you should use the job inspector for, and possibly the job logs. You might also want to just run the subsearch on its own and see what results it yields. Having said that, some remarks:
1. Unless your "index IN (...)" contains some wildcards, the exclusion following it doesn't make sense.
2. "NewProcessName IN (*)" is a very ineffective approach.
3. Your subsearch results will get rendered as a set of OR-ed composite AND conditions. Each of them will have a specific _time value. So while your main search contains earliest/latest, your subsearch will either return an empty condition set or will tell Splunk to search only at specific points in time at full seconds. If your data has sub-second time resolution, you're likely not to match much.
4. Your subsearch will actually not yield any results at all, since you're doing tstats count over some fields but then searching through the tstats results using a field which is not included in those results. So either you've trimmed your search too much while anonymizing it, or it's simply broken.
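To see exactly what such a subsearch expands to, run it on its own with | format appended. A minimal sketch, with placeholder index and field names rather than the poster's real ones:

    | tstats count
        where index=placeholder_index earliest=-1h@h latest=@h
        by host _time span=1s
    | search host!="n/a"
    | fields host _time
    | format

The search field in the output is the literal ( ( host="..." AND _time="..." ) OR ... ) string that gets spliced into the outer search, which makes the per-second _time constraints from point 3 easy to spot.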
Hey, lately I was working on an SPL search and wondered why it isn't working. This is simplified:

    index IN (anonymized_index_1, anonymized_index_2, anonymized_index_3, anonymized_index_4) NOT index IN (excluded_index_1)
    earliest=-1h@h latest=@h
    EventCode=xxxx sourcetype="AnonymizedSourceType" NewProcessName IN (*)
        [| tstats count
            where index IN (anonymized_index_3, anonymized_index_1, anonymized_index_4, anonymized_index_2) NOT index IN (excluded_index_1)
            earliest=-1h@h latest=@h
            idx_EventCode=xxxx sourcetype="AnonymizedSourceType" idx_NewProcessName IN (*)
            by idx_Field1 _time idx_Field2 host index span=1s
        | search anonym_ref!="n/a" OR (idx_NewProcessName IN (*placeholder_1*, *placeholder_2*) AND (placeholder_field_1=* OR placeholder_field_2=*))
        ]

When I run this SPL, I've noticed inconsistent behavior regarding the earliest and latest values. Sometimes the search respects the defined earliest and latest values, but at other times it completely ignores them and instead uses the time range from the UI time picker. After experimenting, I observed that if I modify the search command to combine the conditions into one single condition instead of having two separate conditions, it seems to work as expected. However, I find this behavior quite strange and inconsistent. I would like to retain the current structure of the search command (with two conditions) but ensure it always respects the defined earliest and latest values. If anyone can identify why this issue occurs or provide suggestions to resolve it while maintaining the current structure, I'd greatly appreciate your input.
This is an example from the Splunk dashboard examples app (Custom Table Row Expansion) which shows lazy search string evaluation. https://splunkbase.splunk.com/app/1603

    requirejs([
        '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min',
        'splunkjs/mvc/tableview',
        'splunkjs/mvc/chartview',
        'splunkjs/mvc/searchmanager',
        'splunkjs/mvc',
        'splunkjs/mvc/simplexml/ready!'
    ], function(_, TableView, ChartView, SearchManager, mvc) {
        var EventSearchBasedRowExpansionRenderer = TableView.BaseRowExpansionRenderer.extend({
            initialize: function() {
                // initialize will run once, so we will set up a search and a chart to be reused.
                this._searchManager = new SearchManager({
                    id: 'details-search-manager',
                    preview: false
                });
                this._chartView = new ChartView({
                    'managerid': 'details-search-manager',
                    'charting.legend.placement': 'none'
                });
            },
            canRender: function(rowData) {
                // Since more than one row expansion renderer can be registered, we let each
                // decide if they can handle that data. Here we will always handle it.
                return true;
            },
            render: function($container, rowData) {
                // rowData contains information about the row that is expanded.
                // We can see the cells, fields, and values.
                // We will find the sourcetype cell to use its value.
                var sourcetypeCell = _(rowData.cells).find(function(cell) {
                    return cell.field === 'sourcetype';
                });
                // Update the search with the sourcetype that we are interested in.
                this._searchManager.set({
                    search: 'index=_internal sourcetype=' + sourcetypeCell.value + ' | timechart count'
                });
                // $container is the jQuery object where we can put our content.
                // In this case we will render our chart and add it to the $container.
                $container.append(this._chartView.render().el);
            }
        });

        var tableElement = mvc.Components.getInstance('expand_with_events');
        tableElement.getVisualization(function(tableView) {
            // Add custom cell renderer; the table will re-render automatically.
            tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer());
        });
    });
Even if everything else is OK, you're using the wrong endpoint. It's "collector", not "collection".
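For what it's worth, a minimal sketch of the corrected call (myServer and myToken are the placeholders from the question below; note that HEC only accepts specific top-level JSON keys such as event, index, host, source, sourcetype, fields and time, so custom payload belongs inside event):

    const data = {
        index: "myIndex",
        // Custom fields go inside "event"; HEC rejects unknown top-level keys.
        event: {
            name: "myEvent",
            details: { /* ... */ }
        }
    };

    fetch("https://myServer:8088/services/collector/event", {
        method: "POST",
        headers: {
            "Authorization": "Splunk myToken"
        },
        body: JSON.stringify(data)
    });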
Looks good! Just be mindful of the difference between the plus (+) operator and the dot (.) operator. Plus concatenates strings and adds numbers, but dot concatenates both strings and numbers as strings. If you're unsure of the order of operations or of the variable's value in other contexts, you can wrap the variables in the tostring() function. It's interesting that Splunk also assumed a leading space would work in the SC4S transforms:

    [metadata_time]
    SOURCE_KEY = _time
    REGEX = (.*)
    FORMAT = t="$1$0
    DEST_KEY = _raw

@PickleRick's suggestion to escape the space with e.g. a backslash just evaluates to a literal backslash followed by a space.
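A quick illustrative run of the difference (not from the original thread):

    | makeresults
    | eval n = 1 + 2        ``` 3: plus adds numbers ```
    | eval s = "1" . "2"    ``` "12": dot concatenates ```
    | eval m = 1 . 2        ``` "12": dot coerces numbers to strings ```
    | eval safe = tostring(1) . tostring(2)  ``` explicit coercion when in doubt ```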
I don't have much experience with ingest actions, but my understanding is that they can indeed be called later in the event's path, on already-parsed data. Remember, though, that they do have limited functionality.
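If I understand the mechanism correctly, that is because ingest actions are deployed as RULESET- stanzas in props.conf, which, unlike classic TRANSFORMS- stanzas, are also applied to data an upstream instance has already parsed. A hypothetical sketch (the stanza and transform names are made up):

    # props.conf
    [my_sourcetype]
    # RULESET- runs even on cooked/parsed data arriving from an upstream HF.
    RULESET-drop_debug = drop_debug_events

    # transforms.conf
    [drop_debug_events]
    INGEST_EVAL = queue=if(match(_raw, "DEBUG"), "nullQueue", queue)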
I just tried, but unfortunately this is not working. I'm still running into the same issue where the search is not using the JavaScript variable. In the below code, I even tried "+splQuery+" but nothing.

    var splQuery = "| makeresults";
    var SearchManager = require("splunkjs/mvc/searchmanager");
    var mysearch = new SearchManager({
        id: "mysearch",
        autostart: "false",
        search: ""
    });
    mysearch.settings.set("search", splQuery);
In the end, after struggling with this for several days, I thought to myself, this is leading me nowhere. I wanted the fields to follow each other like this: (TIME)(SUBSECOND) (HOST). I had the idea to concentrate on adding the whitespace before HOST, and not after TIME or SUBSECOND. This approach also had its problems, because spaces at the start of the FORMAT string seem to be ignored, but here I managed to get around that: https://community.splunk.com/t5/Getting-Data-In/Force-inclusion-of-space-character-as-a-first-character-in/m-p/709157 This way I could let go of the question of adding whitespaces conditionally. Though I could not solve this particular problem, my overall problem is now solved.
@tscroggins Thanks for the idea. It worked, though I added my own set of modifications to it. As a final touch I would like to put the relevant part of my config here, so as to contribute it back to the community:

    [md_host]
    INGEST_EVAL = _raw=" _h=".host." "._raw

    [md_subsecond]
    SOURCE_KEY = _meta
    REGEX = _subsecond=(\.\d+)
    FORMAT = $1$0
    DEST_KEY = _raw

    [md_time]
    SOURCE_KEY = _time
    REGEX = (.*)
    FORMAT = _ts=$1$0
    DEST_KEY = _raw
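For anyone adapting this: assuming the transforms are invoked in the order md_host, md_subsecond, md_time, each one prepends to _raw ($0 in FORMAT is the current value of DEST_KEY), so an event "something happened" from host web01 with _time 1731509563 and _subsecond .250 would come out roughly as (illustrative values only):

    _ts=1731509563.250 _h=web01 something happened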
Hi Team, Version: Splunk Enterprise v9.2.1. We are trying to capture user-generated data, so we have created forms with a Classic Dashboard utilising HTML, CSS and JS. Our current approach to capturing data is outputting everything to a csv file and then importing it back into Splunk. Short term, and with little data, this isn't a drama and we can display the data how we want, but I can see the long-term issues (unable to update without outputting the whole file again), so we are looking for different ways to capture this. One option is KV Stores, where we can update the specific information that needs changing, but we are also looking at HEC and ingesting the data directly into Splunk. I am not a front-end expert, so I have encountered an issue I'm not sure how to get by. We can use curl after allowing the port through our firewall, and that returns success, even though Splunk does not ingest, but I want to do this directly via JS. My dashboard is built using HTML and has a <button>; my JS has an EventListener("click", function) which works, as we have been using alerts and console.logs while fault finding. It seems to be failing at the fetch:

    const data = {
        event: "myEvent",
        index: "myIndex",
        details: { myDetails }
    };
    fetch("https://myServer:8088/services/collection/event", {
        method: "POST",
        headers: {
            "Authorization": "Splunk myToken",
        },
        body: JSON.stringify(data)
    })

But we receive the following error:

    Uncaught (in promise) TypeError: Failed to fetch at HTMLButtonElement.submit (eval at _runscript (dashboard)), <anonymous>)

Every online search says to check the URL (which is correct) or the token (which is correct). With the curl not ingesting and the above error, would anyone have any other suggestions as to what the cause might be? p.s. While we are still maturing with Splunk, this dashboard and the JS are being run from a Search Head. Regards, Ben
Hello, I’m trying to tune the Machine Learning Toolkit in order to detect authentication abuse on a web portal (based upon LemonLDAP::NG). My logs look like this:

    (time/host/... header) client=(IP address) user=(login) sessionID=(session-id) mail=(user email address) action=(various statuses: connected / non-existent user / wrong pwd…)

I would like to train the Machine Learning Toolkit so that I can detect anomalies. Those anomalies can be:
- a client that has made auth attempts for an unusual number of logins
- a client that has made auth attempts for both non-existing and existing users
- …

So far it fails hard. I’ve trained a model like this on approx. a month of data:

    index="webauth" ( TERM(was) TERM(not) TERM(found) TERM(in) TERM(LDAP) ) OR TERM(connected) OR TERM(credentials) linecount=1
    | rex "action=(?<act>.*)"
    | eval action=case(match(act,".* connected"), "connected", match(act,".* was not found in LDAP directory.*"), "unknown", match(act, ".* credentials"), "wrongpassword")
    | bin span=1h _time
    | eventstats dc(user) AS dcUsers, count(user) AS countUsers BY client,_time,action
    | search dcUsers>1
    | stats values(dcUsers) AS DCU, values(countUsers) AS CU BY client,_time,action
    | eval HourOfDay=strftime(_time,"%H")
    | fit DensityFunction CU by "client,DCU" as outlier into app:TEST

Then I’ve tested the model on another time interval where I know there is a big anomaly, by replacing the fit directive with "apply (model-name) threshold=(various values)". No results. So I guess I’m not on the right track to achieve this. Any help appreciated!
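For completeness, the apply search I ran looks roughly like this: the feature pipeline is identical to the fit search, with only the final command swapped, and the threshold value is just one of the values I tried:

    index="webauth" ( TERM(was) TERM(not) TERM(found) TERM(in) TERM(LDAP) ) OR TERM(connected) OR TERM(credentials) linecount=1
    | rex "action=(?<act>.*)"
    | eval action=case(match(act,".* connected"), "connected", match(act,".* was not found in LDAP directory.*"), "unknown", match(act, ".* credentials"), "wrongpassword")
    | bin span=1h _time
    | eventstats dc(user) AS dcUsers, count(user) AS countUsers BY client,_time,action
    | search dcUsers>1
    | stats values(dcUsers) AS DCU, values(countUsers) AS CU BY client,_time,action
    | eval HourOfDay=strftime(_time,"%H")
    | apply app:TEST threshold=0.01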
On dev and in your own lab you can do almost anything you like, but in production (and also in test environments for others) you should follow Splunk's validated architectures: https://docs.splunk.com/Documentation/SVA/current/Architectures/About. Of course you can play with how you implement the LB etc., but I strongly recommend using an external one, as those are also HA versions.
This was an excellent explanation about raw, cooked and parsed data! One thing to add here: if a HF has applied ingest actions to data, the next HF/indexer can still process that data again.
Here is one conf presentation which could help you: https://conf.splunk.com/files/2019/slides/FN1570.pdf. But as @PickleRick said, there could be many reasons behind that issue.
Raised a case with Splunk; they acknowledged it and replied that they are aware of it, and that it is a limitation which will be fixed in an upcoming release, since it requires code-level changes and there is no workaround for it for now.