All Posts
To be fully honest, it's a "double donut" version, so I wouldn't be surprised if it were a bit buggy.

1. Don't jump head-first into a version just because it has been released. Unless it contains fixes for issues affecting you or patches for known vulnerabilities, there's usually no reason to upgrade. Splunk handles a wide range of older forwarders quite well.

2. What you can do to help with product development and bug fixing is gather the installation logs and raise a support ticket. (And if the problem isn't internal to the installer but can be bypassed, or is triggered by some specific set of conditions, share that knowledge here.)
I'm not sure, but you might need to use the --user option as well. In my tests I don't see any output if I pass --app but don't pass --user.
Then @livehybrid's solution should work. When you're getting data from an HF (or any other "full" Splunk instance), it arrives already parsed and completely bypasses most of the props/transforms mechanics, except for RULESETs.
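For what it's worth, RULESETs are the one hook that still fires on already-parsed data, so they can still route it. A minimal sketch, assuming a hypothetical sourcetype and transform name:

# props.conf (on the receiving indexer)
[opnsense:syslog]
RULESET-route_opnsense = route_opnsense_index

# transforms.conf
[route_opnsense_index]
INGEST_EVAL = index="opnsense"

The point of using a RULESET- class here rather than a plain TRANSFORMS- class is precisely that rulesets still apply to cooked data arriving from an HF.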
If you're OK with the timestamp simply being assigned to the event (no need for it to be explicitly written into the event itself), just parse out the timestamp, cut off the whole header, and leave the JSON part on its own. Timestamp recognition takes place very early in the ingestion pipeline, so you can do it this way without having a "timestamp" field in your JSON; you'll just have the _time field.
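A minimal props.conf sketch of that approach, assuming the sample event layout from this thread (timestamp first, header text, then JSON starting at the first "{"):

# props.conf — sourcetype name as used elsewhere in this thread
[test_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# timestamp sits at the very start of the raw event
TIME_PREFIX = ^
# use %:z instead if the offset is written as +08:00
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 30
# SEDCMD runs after timestamp extraction, so this can safely
# drop everything before the first "{" and keep only the JSON
SEDCMD-strip_header = s/^[^{]+//
KV_MODE = json

Since _time is already set by the time the SEDCMD fires, the header can be discarded without losing the timestamp.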
1. As @ITWhisperer noticed, you might be reinventing the wheel: the value probably comes from some earlier time-based data, so there may be no need to render this value and then parse it back and forth.

2. Are you sure (you might be, just asking) that you want to calculate the average of the averages? If the overall average is what you're after, an average of averages will not give you that. See the sketch below.
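To illustrate point 2 with hypothetical field names: if all you have are per-group averages, you also need the counts to recover the true overall average.

| stats avg(total_minutes) as per_host_avg, count as n by host
| stats sum(eval(per_host_avg * n)) as grand_total, sum(n) as total_n
| eval overall_avg = grand_total / total_n

Of course, a single | stats avg(total_minutes) over the raw events gives the same result in one step.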
Hi ITWhisperer, thank you very much. It is working exactly as I wished. Thank you.
I like this, it's an elegant solution. Let me try it.
Thanks @ITWhisperer. I tried that, as I found the same suggestion in your other posts, but this is how it displays the result. How do I make it show as "1h, 45 mins", i.e. a more easily readable format in days, hours, and minutes?
Hi @livehybrid, thanks for this. I looked at the parsing documentation earlier. Is there a simpler way to do this? I don't need to rewrite fields etc., as I have a TA doing that. All I need is: if syslog comes from this HOST/IP, then send it to index=opnsense. Is this achievable with one parser config, or is what you already described the only way of doing it?
Try something like this:

| eval average_min=tostring(average_min,"duration")
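One caveat worth noting: tostring(X, "duration") interprets X as seconds, so if average_min actually holds minutes (as earlier in this thread), you may need to scale it first:

| eval average_min=tostring(average_min * 60, "duration")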
Thanks a lot. I got average_min as per your query, but now how do I convert/represent it back into an hours-and-minutes format?

| stats avg(total_minutes) as average_min

For example, your query gave me average_min=112.9. How do I convert this back to show as 1 hour, 53 minutes?
Hi @alvinsullivan01

If this is a JSON event then you should be able to use:

TIMESTAMP_FIELDS = timestamp
# Also adjust TIME_FORMAT to include the timezone.
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
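For context, a minimal sketch of how those settings might sit in a complete stanza; the sourcetype name is a placeholder, and TIMESTAMP_FIELDS assumes indexed extractions are enabled for the sourcetype:

# props.conf — "my:json" is a hypothetical sourcetype
[my:json]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z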
Hi @L_Petch

If you haven't already, check out https://splunk.github.io/splunk-connect-for-syslog/2.44.2/gettingstarted/create-parser/ which has a guide on how to create custom parsers. This is probably the best way, because it avoids relying on a potentially fragile alternative process for identifying and routing the data (e.g. with props/transforms once it lands in Splunk). Here is a sample to get you started.

In /opt/sc4s/local/config/app_parsers/:

# app-opnsense.conf
application app-opnsense[sc4s-network-source] {
    filter {
        program("filterlog" flags(prefix))
        or program("opnsense" flags(prefix))
        or host("opnsense*" type(glob))
        or message("opnsense" flags(substring));
    };
    parser {
        p_set_netsource_fields(
            vendor("pfsense")    # Use pfsense as base
            product("opnsense")
        );
    };
};

Then create the destination in /opt/sc4s/local/config/destinations/:

# dest-opnsense.conf
destination d_opnsense {
    splunk(
        class(splunk_hec)
        template("$(format-splunk-hec)")
        hec_token("YOUR_HEC_TOKEN")
        url("https://your-splunk:8088/services/collector/event")
        index("opnsense")
        source("${.splunk.source}")
        sourcetype("opnsense:syslog")
    );
};

log {
    source(s_network);
    filter(f_is_source_identified);
    if (match("opnsense" value("fields.sc4s_vendor"))) {
        destination(d_opnsense);
        flags(flow-control,final);
    };
};
Hi @Silah

I don't think it's possible to save these after the execution. However, I wonder if you could use the JavaScript execution to send a metric to Splunk Observability, like this:

function sendStockMetric() {
  const status = document.body.textContent.includes("OUT OF STOCK") ? 0 : 1;
  fetch('https://ingest.{REALM}.signalfx.com/v2/datapoint', {
    method: 'POST',
    headers: {
      'X-SF-TOKEN': 'YOUR_TOKEN',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      gauge: [{
        metric: 'shop.item.stock_status',
        value: status,
        dimensions: {
          item: 'your_product_001',
          status: status === 1 ? 'instock' : 'oos'
        }
      }]
    })
  });
}
sendStockMetric();
Hi All

I am building synthetic monitoring in Observability Cloud for an online shop. One thing I want to monitor is whether or not an item's stock status is correct. I have a synthetic step "Save return value from JavaScript" to this effect:

function checkStockStatus() {
  if (document.body.textContent.includes("OUT OF STOCK")) {
    return "oos";
  }
  return "instock";
}
checkStockStatus();

I am storing this in a variable named stockStatus. Is there any way I can use the value of this variable in a dashboard, or to trigger an alert? For example, say I am selling a golden lamp and it gets sold out; how can I get a dashboard to show "oos" somewhere?

Thanks
Hi @alvinsullivan01,

this seems to be JSON format, and raw JSON data differ from the visualized form. So, to check your regex, open the raw data view rather than the JSON view: you probably have to add some backslashes to your regex.

Ciao.

Giuseppe
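For example, matching a quoted JSON key in the raw data usually needs escaped quotes; a hypothetical rex against the raw event might look like this:

| rex field=_raw "\"timestamp\":\s*\"(?<event_ts>[^\"]+)\""

The same pattern tested against the pretty-printed JSON view would look quite different, which is why checking against _raw matters.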
Hello,

I need to send all syslog data from opnsense to a specific index. As this is not a known vendor source, what is the easiest way to do this? I have seen that you can create a parser based on certain factors, so I'm guessing this would be the easiest way? If so, does anyone have a good parser guide?
It looks like all these times are based on a common (other, unshown) field. Perhaps it would be helpful if you shared more information about how these current fields are calculated, as there may be a different calculation you can do to get nearer to your desired result.
I have an issue transforming data and extracting field values. Here is my sample data:

2025-07-20T10:15:30+08:00 h1 test[123456]: {"data": {"a": 1, "b": 2, "c": 3}}

The data has the timestamp and other information at the beginning, and the data dictionary at the end. I want my data to go into Splunk in JSON format. Other than the data, what I need is the timestamp. So I created a transform to pick only the data dictionary and move the timestamp into that dictionary. Here is my transforms.conf:

[test_add_timestamp]
DEST_KEY = _raw
REGEX = ([^\s]+)[^:]+:\s*(.+)}
FORMAT = $2, "timestamp": "$1"}
LOOKAHEAD = 32768

Here is my props.conf that uses the transform:

[test_log]
SHOULD_LINEMERGE = false
TRUNCATE = 0
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
disabled = false
pulldown_type = true
TIME_PREFIX = timestamp
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TZ = UTC
TRANSFORMS-timestamp = test_add_timestamp

After the transform, the data looks like this:

{"data": {"a": 1, "b": 2, "c": 3}, "timestamp": "2025-07-20T10:15:30+08:00"}

But when I search the data in Splunk, why do I also see "none" as a value of timestamp? Another thing I noticed: in my Splunk index that holds a lot of data, a few events have this timestamp extracted and most have no timestamp, which is fine. But when I click "timestamp" under interesting fields, why does it show only "none"? I also noticed some of the JSON keys are not available under "interesting fields". What is the logic behind this?
@neerajs_81

When it's formatted as strings like "1 hour, 29 minutes", you'll need to convert those time strings into a numeric format first (typically total minutes) before applying stats avg(). E.g.:

| your_base_search
| eval time_parts=split(avg_time, ", ")
| eval hours=tonumber(replace(mvindex(time_parts, 0), " hour[s]?", ""))
| eval minutes=tonumber(replace(mvindex(time_parts, 1), " minute[s]?", ""))
| eval total_minutes=(hours * 60) + minutes
| stats avg(total_minutes) as average_min

Regards,
Prewin
Splunk Enthusiast | Always happy to help!
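One caveat with the sketch above: if a value has no hour component (e.g. just "45 minutes"), split() returns a single part and both hours and minutes come out null. A slightly more defensive variant, assuming the same hypothetical avg_time field:

| your_base_search
| eval hours=if(match(avg_time, "hour"), tonumber(replace(avg_time, "^(\d+)\s+hours?.*$", "\1")), 0)
| eval minutes=if(match(avg_time, "minute"), tonumber(replace(avg_time, "^.*?(\d+)\s+minutes?.*$", "\1")), 0)
| eval total_minutes=(hours * 60) + minutes
| stats avg(total_minutes) as average_min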