All Posts

Assuming component1 is a subset of component2 (which you seem to be implying):

| eval component=coalesce(component1, component2) | stats count by component | where count=1
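Put together with the original search, a complete version of that approach might look like this (a sketch, assuming both tasks live in index=abc as in the question; tracking the task field is a variant that makes it explicit which side each value came from):

index=abc (task="task1" OR task="task2")
| eval component=coalesce(component1, component2)
| stats values(task) AS tasks dc(task) AS task_count BY component
| where task_count=1 AND tasks="task2"

The rows surviving the final where are the component values that appear only under task2, i.e. in component2 but not in component1.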
Hi All, how can I find the difference between two tables?

index=abc task="task1" | dedup component1 | table component1 | append [index=abc task="task2" | dedup component2 | table component2] | table component1 component2

These are the two tables. I want to show the extra data that is in component2 and not in component1. How can I do it?
Hi @PickleRick, I am using the "/services/collector" endpoint. How can I explicitly set the timestamp format for that endpoint?
Which HEC endpoint are you sending your data to? If you are using the /event endpoint and you don't explicitly set ?auto_extract_timestamp=true, whatever settings you have in your props are _not_ applied; the timestamp must either be specified explicitly along with the event, or it is taken from the current time on the receiver.
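For illustration, sending to the /event endpoint with an explicit timestamp means putting an epoch time in the payload's time field, roughly like this (a minimal sketch; the host name, token, and sourcetype are placeholders):

curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"time": 1732020796.531, "sourcetype": "my_sourcetype", "event": "raw event text here"}'

Alternatively, appending ?auto_extract_timestamp=true to the endpoint URL tells Splunk to run your props-based timestamp extraction against the event payload instead.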
I don't think it's actually documented anywhere, since it's not normally meant for users to fiddle with. And I would strongly advise against trying to do that. I wouldn't want to do such a thing myself in a production environment. In a lab setup, just for fun and to see how stuff works - sure, why not. But in prod? Hell, no. It's not about the HF _sending_ data. It's about re-parsing incoming, already-parsed data (and additionally, this particular HF would need to actually _not_ send data anywhere else, just export it to syslog; it's a waste of resources, I think).
1. OK. It's just that I'd probably cut the whole "This is an example" line if it's just a constant delimiter between the events.
2. Where? And what does your ingestion process look like?
3. LINE_BREAKER is not defined at input level. It's defined in props, so I'm assuming you meant "splunk btool props list", not inputs. If not, check props, not inputs. BREAK_ONLY_BEFORE is a setting used only when SHOULD_LINEMERGE is set to true, and that case is best avoided (there are very, very rare cases where it makes sense; if possible, avoid enabling line-merging). A concrete props sketch follows below.
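To make point 3 concrete, a props sketch for this case might look like the following (hypothetical sourcetype name; the regex assumes the "This is an Example" line really is a constant separator, and consumes it together with the surrounding newlines so it disappears from the indexed events):

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+[^\r\n]*This is an Example[^\r\n]*[\r\n]+)

You would then verify the effective settings with: splunk btool props list my_sourcetype --debug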
Hi all, I have a problem when trying to send data (metrics) from an Apache HTTP server to Splunk Observability Cloud. My OS is CentOS 7, and I already get CPU/memory metrics. My Apache version is Apache/2.4.6, and server-status is working (http://localhost:8080/server-status?auto). I have referenced the following documents and updated the config file /etc/otel/collector/agent_config.yaml, but I do not get any metrics about Apache!

https://docs.splunk.com/observability/en/gdi/opentelemetry/components/apache-receiver.html
https://docs.splunk.com/observability/en/gdi/monitors-hosts/apache-httpserver.html

Could anybody kindly do me a favor and help fix it? Thanks in advance. #observability
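For comparison, the relevant part of a working agent_config.yaml typically needs both the receiver definition and the receiver listed in a metrics pipeline (a sketch based on the linked docs; the exact set of other receivers in your pipeline will differ, and forgetting the pipelines entry is a common cause of "no metrics"):

receivers:
  apache:
    endpoint: "http://localhost:8080/server-status?auto"
    collection_interval: 30s

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp, apache]

After editing, restart the collector (systemctl restart splunk-otel-collector) and check its logs for errors from the apache receiver.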
Hello Splunkers!! During the testing phase with demo data, the timestamps match accurately. However, in real-time data ingestion, there seems to be a mismatch in the timestamps. This indicates a potential discrepancy in the timestamp parsing or configuration when handling live data. Could you please suggest potential reasons and causes? Additionally, it would be helpful to review the relevant props.conf configuration to ensure consistency.

Sample data:

{"@timestamp":"2024-11-19T12:53:16.5310804+00:00","event":{"action":"log","code":"10010","kind":"event","original":"Communication session on line {1:d}, lost.","context":{"parameter1":"12","parameter2":"2","parameter3":"6","parameter4":"0","physical_line":"12","connected_unit_type_code":"2","connect_logical_unit_number":"6","description":"A User Event message will be generated each time a communication link is lost. This message can be used to detect that an external unit no longer is connected.\nPossible Unit Type codes:\n2 Debug line\n3 ACI line\n4 CWay line","severity":"Info","vehicle_index":"0","unit_type":"NT8000","location":"0","physical_module_id":"0","event_type":"UserEvent","software_module_id":"26"}},"service":{"address":"localhost:50005","name":"Eventlog"},"agent":{"name":"ACI.SystemManager","type":"ACI SystemManager Collector","version":"3.3.0.0"},"project":{"id":"fleet_move_af_sim"},"ecs.version":"8.1.0"}

Current props:

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
#KV_MODE = json
pulldown_type = 1
TIME_PREFIX = \"@timestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7N%:z

Current results: (screenshot showing the mismatched timestamps)

Note: I am using an HTTP Event Collector token to get the data into Splunk. Inputs and props settings are arranged under the Search app.
Hi @winter4, metadata are associated with Splunk, so you can maintain them only in Splunk; you cannot maintain them in a syslog feed to an external third party. So your indexer will receive logs with metadata, while the third party will receive logs without metadata. About the metadata: sourcetype is Splunk metadata, so it isn't relevant for a third party. host is usually present at the beginning of the syslog message, and the third party should simply extract it. source is metadata that you lose when sending syslog to a third party. Ciao. Giuseppe
@timgren Just remove util/console and console from the require block, like:

require(['jquery', 'underscore', 'splunkjs/mvc'], function($, _, mvc) {

You can directly use the console object in your JS code.

Full code:

require(['jquery', 'underscore', 'splunkjs/mvc'], function($, _, mvc) {
    console.log("hieeeee");

    function setToken(name, value) {
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }

    // Main
    $('.dashboard-body').on('click', '[data-on-class],[data-off-class],[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        console.log("Inside the click");
        var target = $(e.currentTarget);
        console.log("here");
        console.log("target.data('on-class')=" + target.data('on-class'));
        var cssOnClass = target.data('on-class');
        var cssOffClass = target.data('off-class');
        if (cssOnClass) {
            $("." + cssOnClass).attr('class', cssOffClass);
            target.attr('class', cssOnClass);
        }
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            var tokens = unsetTokenName.split(",");
            var arrayLength = tokens.length;
            for (var i = 0; i < arrayLength; i++) {
                setToken(tokens[i], undefined);
            }
            //setToken(unsetTokenName, undefined);
        }
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value == null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (e) {
                console.warn('Cannot parse token JSON: ', e);
            }
        }
    });
});

I hope this will help you. Thanks, KV. An upvote would be appreciated if any of my replies help you solve the problem or gain knowledge.
Hi, we are using Splunk Cloud ES and we can't seem to edit the base search macro of the "Alerts" datamodel. The macro in question is "cim_Alerts_indexes", and it appears it has an extra parameter which generates an error when the macro is run manually.

Error: "Error in 'search' command: Unable to parse the search: Comparator '=' has an invalid term on the right hand side"

That is due to the fact that the macro SPL is set up as follows:

(index=(index=azure_security sourcetype="GraphSecurityAlert") OR (index=trendmicro))

The extra "index=" at the beginning is what's messing it up; it should be removed. However, when we try to go to Settings -> Advanced Search and click on this macro, we are taken to the CIM Setup interface (Splunk_SA_CIM), which shows the config settings of the macro, including:

Indexes whitelist = azure_security,trendmicro
Tags whitelist = cloud, pci

Notice that the editable configs do not include the definition, which is:

(index=(index=azure_security sourcetype="GraphSecurityAlert") OR (index=trendmicro))

So can anyone assist with how we can correct this? Regards
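For reference, the corrected definition would presumably read (simply dropping the outer index= wrapper):

(index=azure_security sourcetype="GraphSecurityAlert") OR (index=trendmicro)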
Hi @PickleRick, do you have any documentation detailing the hack that you are thinking of? Or do you have a sample of the configs I can put on the HF to get Splunk to send that data? Any help will be greatly appreciated and serve as a good starting point. Thanks!
Hello, I'm really lost here and NEED help. What exactly did you configure, and where? Because no matter what I try, I can't get it to save.
The current version of Splunk Enterprise on Linux supports several flavors of the 5.x kernel, but does not seem to support the 6.x kernel per the most recent system requirements. We are planning a migration of our Splunk infrastructure from Amazon Linux 2 (kernel 5.10.x) to Amazon Linux 2023 (kernel 6.1.x) due to the approaching operating system end of life. Does anyone know if there are plans for Splunk Enterprise to support the new Amazon OS?
1. It truncates the hyphen - before the "This is an Example". I have now added ([\r\n+])(.*)(This is an Example).* and it captures everything, but the events are broken into single lines. I have set SHOULD_LINEMERGE = false.
2. Yes, props.conf is on the proper component.
3. I verified using this command: splunk btool inputs list --debug (there is no other setting that is overwriting LINE_BREAKER).
NOTE: Can I use BREAK_ONLY_BEFORE instead of LINE_BREAKER?
Custom token script stopped working. Can anyone spot any obvious errors? It worked perfectly from version 6.x - 8.x. I get the error "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." The console isn't very helpful:

common.js:1702 Error: Script error for: util/console
http://requirejs.org/docs/errors.html#scripterror
    at makeError (eval at e.exports (common.js:1:1), <anonymous>:166:17)
    at HTMLScriptElement.onScriptError (eval at e.exports (common.js:1:1), <anonymous>:1689:36)

// Tokenize.js
require(['jquery', 'underscore', 'splunkjs/mvc', 'util/console'], function($, _, mvc, console) {
    function setToken(name, value) {
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }

    // Main
    $('.dashboard-body').on('click', '[data-on-class],[data-off-class],[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        console.log("Inside the click");
        var target = $(e.currentTarget);
        console.log("here");
        console.log("target.data('on-class')=" + target.data('on-class'));
        var cssOnClass = target.data('on-class');
        var cssOffClass = target.data('off-class');
        if (cssOnClass) {
            $("." + cssOnClass).attr('class', cssOffClass);
            target.attr('class', cssOnClass);
        }
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            var tokens = unsetTokenName.split(",");
            var arrayLength = tokens.length;
            for (var i = 0; i < arrayLength; i++) {
                setToken(tokens[i], undefined); //Do something
            }
            //setToken(unsetTokenName, undefined);
        }
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value == null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (e) {
                console.warn('Cannot parse token JSON: ', e);
            }
        }
    });
});
Using just Splunk, you could do an ugly hack and send to another HF instance on which you'd force input data to go through the typing queue again, not skip straight to the indexing queue. But this is a very unusual and unintuitive design. You might be able to use Cribl, but I'm not sure about that.
1. Are you sure the LINE_BREAKER is right? I mean, the capture group in the LINE_BREAKER will be treated as the line breaker and will be removed from the stream. Are you sure you want to cut this much? Not more, not less? Also, you usually include \r and/or \n explicitly in your line breaker definition; otherwise the results might not be what you expect (see the illustration below).
2. Are you sure you're putting your props.conf on the proper component in your environment?
3. Did you verify with btool that there is no other setting overwriting your line breaker?
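To illustrate point 1: given raw data like event one\n---DELIM---\nevent two, a breaker such as the following (hypothetical delimiter, for illustration only) consumes the entire captured span, so the ---DELIM--- line never appears in either event:

LINE_BREAKER = ([\r\n]+---DELIM---[\r\n]+)

Whatever falls inside the first capture group is discarded from the stream; everything outside it stays with the adjacent events.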
Hi @PickleRick, do you know if there is any possible method of sending data from a Splunk HF to a third-party endpoint that also maintains the metadata?
You can do a conditional lookup using the eval form of lookup: https://docs.splunk.com/Documentation/Splunk/9.3.2/SearchReference/ConditionalFunctions#lookup.28.26lt.3Blookup_table.26gt.3B.2C.26lt.3Bjson_object.26gt.3B.2C.26lt.3Bjson_array.26gt.3B.29

| eval LookupResult=if(host_type="application", lookup("clientlist.csv", json_object("hostname", host), json_array("clientcode")), null())

You will get back a field called LookupResult like {"clientcode":"abc"}, and you can then extract the value abc from the result.
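For instance, pulling that value out could look like this (a sketch using the json_extract eval function; spath would work as well):

| eval clientcode=json_extract(LookupResult, "clientcode")

That leaves clientcode containing just abc.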