All Posts

Right now I'm just running a proof of concept. I'll move the field definitions to the indexers later. Right now I'm trying to detect if diff pos1=last(rxError) pos2=last-1(rxError); I want to detect when the value of rxError changes from last-1 to last. Working on that.
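A minimal SPL sketch of that comparison, assuming each event carries an rxError field and a host field (index, sourcetype, and field names here are placeholders, not from the original search):

index=network sourcetype=interface_stats
| sort 0 _time
| streamstats current=f window=1 last(rxError) as prev_rxError by host
| where rxError != prev_rxError

streamstats with current=f window=1 copies the previous event's rxError onto the current one, so the final where clause keeps only the events where the value changed.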
Hi @PickleRick, just to inform you: I have replaced the endpoint as below, but the timestamp mismatch issue still persists.
Thanks. But I searched the documentation for how to enable HEC from configuration files - no results. And I can't find any link on how to enable the management port. Maybe you can help with a direct link?

$ cat /opt/splunkforwarder/etc/apps/splunk_httpinput/local/inputs.conf

[http]
disabled = 0

$ cat /opt/splunkforwarder/etc/system/local/inputs.conf

[http]
disabled = 0

[http://input]
disabled = 0

Used: https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/UseHECusingconffiles
We are collecting every 10 minutes and have about 1000 servers, with another 1000 coming early next year. Long term, we are interested in monitoring all of the output for general network health. The task at hand is being able to check whether there are network issues when we also notice Ceph OSD issues. The advice for that is to look for dropped packets on the host side. So that is what I'm trying to capture, and then detect when the dropped packet value changes.
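If it helps, a hedged sketch of one way to flag an increase in a dropped-packet counter between consecutive 10-minute collections (the index, sourcetype, and field names are assumptions):

index=os sourcetype=interface
| sort 0 _time
| streamstats current=f window=1 last(rx_dropped) as prev_rx_dropped by host interface
| eval drop_delta = rx_dropped - prev_rx_dropped
| where drop_delta > 0

Splitting the streamstats window by host and interface keeps the comparison within a single interface's time series.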
Yes: splunk = client.connect(host='localhost', port=8089, splunkToken='eyJraWQiOiJzc.....')
You have to use /services/collector/event?auto_extract_timestamp=true  
Hello everyone! I need help/a hint: I tried to set up log forwarding from macOS (ARM) to Splunk, but the logs never arrived. I followed the instructions from this video, and also installed and configured the Add-on for Unix and Linux. And what index will they appear in? Thanks!

Inside /Applications/SplunkForwarder/etc/system/local I have: inputs.conf, outputs.conf, server.conf.

inputs.conf

[monitor:///var/log/system.log]
disabled = 0

outputs.conf

[tcpout:default-autolb-group]
server = ip:9997
compressed = true

[tcpout-server://ip:9997]

server.conf

[general]
serverName =
pass4SymmKey =

[sslConfig]
sslPassword =

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free
Adding the app name in the machine agent configuration file never works. You need to add the below parameter to the application startup file (for the Java agent):

-Dappdynamics.agent.uniqueHostId=<unique_Host_Name>

Once you add the above line, restart the application/JVM and then restart the machine agent once. It will work for sure.
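For reference, a hedged example of where that flag sits in a Java startup command (the agent path, host ID, and jar name are hypothetical):

java -javaagent:/opt/appdynamics/javaagent.jar -Dappdynamics.agent.uniqueHostId=app-node-01 -jar myapp.jar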
Hi @ITWhisperer, actually it is not a subset; it's just that I'm passing a different token for task and getting the 2nd table. In this case, will coalesce work?

index=abc task="$task1$"
| dedup component1
| table component1
| append [ search index=abc task="$task2$" | dedup component2 | table component2 ]
| table component1 component2
Hello, we have a multisite indexer cluster with Splunk Enterprise 9.1.2 running in Red Hat 7 VMs, and we need to migrate them to other VMs with Red Hat 9. The documentation requires that all members of a cluster have the same OS and version. I was thinking of simply adding one new indexer (Red Hat 9 VM) at a time and detaching an old one, forcing the bucket count. So for a short time the cluster would have members with different OS versions. Upgrading from Red Hat 7 to Red Hat 9 directly in the Splunk environment is not possible. I would like to know if there are critical issues to face while the migration is happening. I hope the procedure won't last more than 2 days.
Assuming component1 is a subset of component2 (which you seem to be implying)

| eval component=coalesce(component1, component2)
| stats count by component
| where count=1
Hi All, how can I find the difference between 2 tables?

index=abc task="task1"
| dedup component1
| table component1
| append [ search index=abc task="task2" | dedup component2 | table component2 ]
| table component1 component2

These are the 2 tables. I want to show the extra data which is in component2 and not in component1. How can I do it?
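One possible sketch, keeping only the values that appear under task2 but not under task1 (same index and field names as the search above; the NOT-subsearch idiom assumes your result counts stay within subsearch limits):

index=abc task="task2"
| dedup component2
| search NOT [ search index=abc task="task1" | dedup component1 | rename component1 AS component2 | fields component2 ]
| table component2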
Hi @PickleRick, I am using this endpoint: "/services/collector". And how can I explicitly use the timestamp format with the endpoint?
Which HEC endpoint are you sending your data to? If you are using the /event endpoint and you don't explicitly set ?auto_extract_timestamp=true, whatever settings you have in your props are _not_ applied, and the timestamp must be specified explicitly along with the event, or it is taken from the current timestamp on the receiver.
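For example, a sketch of the full collector URL with that flag set (the host name is a placeholder; 8088 is only the default HEC port):

https://your-splunk-host:8088/services/collector/event?auto_extract_timestamp=true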
I don't think it's actually documented anywhere, since it's not normally meant for users to fiddle with. And I would strongly advise against trying to do that. I'd probably not want to do such a thing myself in a production environment. In a lab setup, just for fun and to see how stuff works - sure, why not. But in prod? Hell, no. It's not about the HF _sending_ data. It's about re-parsing incoming, already-parsed data (and additionally, this particular HF would need to actually _not_ send data anywhere else, just export it to syslog; it's actually a waste of resources, I think).
1. OK. It's just that I'd probably just cut the whole "This is an example" line if it's just a constant delimiter between the events.
2. Where? And what does your ingestion process look like?
3. LINE_BREAKER is not defined at input level. It's defined in props, but I'm assuming you meant "splunk btool props list", not inputs. If not, check props, not inputs. BREAK_ONLY_BEFORE is a setting used only when SHOULD_LINEMERGE is set to true, and that case is best avoided (there are very, very rare cases where it makes sense; if possible, avoid enabling line-merging - see the sketch below this list).
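For reference (point 3), a minimal props.conf sketch that breaks events on LINE_BREAKER alone, with line-merging off (the sourcetype name is hypothetical):

[my:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)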
Hi, all. I have a problem when trying to send data (metrics) from an Apache HTTP server to Splunk Observability Cloud.

My OS: CentOS 7, and I have already got the CPU/memory metrics.
My Apache version: Apache/2.4.6, and server stats are working (http://localhost:8080/server-status?auto).

I have referenced the following documents and updated the config file /etc/otel/collector/agent_config.yaml, but I did not get any metrics about Apache!

https://docs.splunk.com/observability/en/gdi/opentelemetry/components/apache-receiver.html
https://docs.splunk.com/observability/en/gdi/monitors-hosts/apache-httpserver.html

Would anybody kindly do me a favor and help fix it? Thanks in advance. #observability
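In case it helps, a hedged sketch of the relevant pieces of agent_config.yaml for the Apache receiver (the endpoint mirrors the server-status URL above; the interval and the other receiver names are assumptions based on the linked docs):

receivers:
  apache:
    endpoint: "http://localhost:8080/server-status?auto"
    collection_interval: 60s

service:
  pipelines:
    metrics:
      # add apache alongside whatever receivers are already listed here
      receivers: [hostmetrics, apache]

A common thing to miss is that the receiver only produces data once it is listed under a metrics pipeline in the service section.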
Hello Splunkers!! During the testing phase with demo data, the timestamps match accurately. However, with real-time data ingestion there is a mismatch in the timestamps. This indicates a potential discrepancy in the timestamp parsing or configuration when handling live data. Could you please suggest the potential reason and cause? Additionally, it would be helpful to review the relevant props.conf configuration to ensure consistency.

Sample data:

{"@timestamp":"2024-11-19T12:53:16.5310804+00:00","event":{"action":"log","code":"10010","kind":"event","original":"Communication session on line {1:d}, lost.","context":{"parameter1":"12","parameter2":"2","parameter3":"6","parameter4":"0","physical_line":"12","connected_unit_type_code":"2","connect_logical_unit_number":"6","description":"A User Event message will be generated each time a communication link is lost. This message can be used to detect that an external unit no longer is connected.\nPossible Unit Type codes:\n2 Debug line\n3 ACI line\n4 CWay line","severity":"Info","vehicle_index":"0","unit_type":"NT8000","location":"0","physical_module_id":"0","event_type":"UserEvent","software_module_id":"26"}},"service":{"address":"localhost:50005","name":"Eventlog"},"agent":{"name":"ACI.SystemManager","type":"ACI SystemManager Collector","version":"3.3.0.0"},"project":{"id":"fleet_move_af_sim"},"ecs.version":"8.1.0"}

Current props:

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
#KV_MODE = json
pulldown_type = 1
TIME_PREFIX = \"@timestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7N%:z

Current results: mismatched timestamps (screenshot not reproduced here).

Note: I am using an HTTP Event Collector token to get the data into Splunk. Inputs and props settings are arranged under the search app.
Hi @winter4, metadata are associated with Splunk, so you can maintain them only in Splunk; you cannot maintain them in a syslog feed to an external third party. So, your indexer will receive logs with metadata, while the third party will receive logs without metadata.

About metadata:
sourcetype is Splunk metadata, so it isn't relevant for a third party.
host is usually present at the beginning of the syslog message, and the third party should only extract it.
source is a metadata field that you lose when sending syslogs to a third party.

Ciao. Giuseppe
@timgren Just remove util/console and console from the require block, like:

require(['jquery', 'underscore', 'splunkjs/mvc'], function($, _, mvc) {

You can directly use the console object in your JS code.

Full code:

require(['jquery', 'underscore', 'splunkjs/mvc'], function($, _, mvc) {
    console.log("hieeeee");

    function setToken(name, value) {
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }

    // Main
    $('.dashboard-body').on('click', '[data-on-class],[data-off-class],[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        console.log("Inside the click");
        var target = $(e.currentTarget);
        console.log("here");
        console.log("target.data('on-class')=" + target.data('on-class'));
        var cssOnClass = target.data('on-class');
        var cssOffClass = target.data('off-class');
        if (cssOnClass) {
            $("." + cssOnClass).attr('class', cssOffClass);
            target.attr('class', cssOnClass);
        }
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            var tokens = unsetTokenName.split(",");
            var arrayLength = tokens.length;
            for (var i = 0; i < arrayLength; i++) {
                setToken(tokens[i], undefined); // Do something
            }
            //setToken(unsetTokenName, undefined);
        }
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value == null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (e) {
                console.warn('Cannot parse token JSON: ', e);
            }
        }
    });
});

I hope this will help you.
Thanks, KV
An upvote would be appreciated if any of my replies help you solve the problem or gain knowledge.