All Posts

Thanks @ITWhisperer. I tried that, as I found the same approach in your other posts, but this is how it displays the result. How do I make it show as "1h, 45 mins", or some more easily readable format in days, hours, and minutes?
Hi @livehybrid,

Thanks for this. I looked at the parsing documentation earlier. Is there not a simpler way to do this? I don't need to rewrite fields etc. as I have a TA doing that. All I need is: if syslog comes from this HOST/IP, then send it to index=opnsense. Is this achievable with one parser config, or is what you already described the only way of doing it?
Try something like this:

| eval average_min=tostring(average_min,"duration")
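Worth noting: tostring(X, "duration") interprets X as seconds, so a value held in minutes should be multiplied by 60 first. A minimal sketch, assuming average_min holds minutes (the field names here are illustrative):

| eval average_sec=average_min*60
| eval readable=tostring(average_sec, "duration")

For a "1h 45m" style string instead, floor/modulo arithmetic works too:

| eval mins=round(average_min)
| eval readable=floor(mins/60)."h ".(mins%60)."m"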
Thanks a lot. I got average_min as per your query, but now how do I convert / represent it back in an hours-and-minutes format?

| stats avg(total_minutes) as average_min

For example, your query gave me average_min=112.9. How do I convert this back to show as something like "1 hour, 53 minutes"?
Hi @alvinsullivan01

If this is a JSON event then you should be able to use:

TIMESTAMP_FIELDS = timestamp
# Also adjust TIME_FORMAT to include the timezone.
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
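For context, a sketch of how those settings might sit in a full props.conf stanza. The sourcetype name is hypothetical, and the INDEXED_EXTRACTIONS line is an assumption (TIMESTAMP_FIELDS applies to structured-data parsing), not something stated in the thread:

# props.conf - sourcetype name is hypothetical
[my:json:events]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z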
Hi @L_Petch

If you haven't already, check out https://splunk.github.io/splunk-connect-for-syslog/2.44.2/gettingstarted/create-parser/ which has a guide on how to create custom parsers. This is probably the best way, because it avoids a potentially fragile alternative process for identifying and routing the data (e.g. with props/transforms once it lands in Splunk). Here is a sample to get you started.

In /opt/sc4s/local/config/app_parsers/:

# app-opnsense.conf
application app-opnsense[sc4s-network-source] {
    filter {
        program("filterlog" flags(prefix))
        or program("opnsense" flags(prefix))
        or host("opnsense*" type(glob))
        or message("opnsense" flags(substring));
    };
    parser {
        p_set_netsource_fields(
            vendor("pfsense")   # Use pfsense as base
            product("opnsense")
        );
    };
};

Then create the destination in /opt/sc4s/local/config/destinations/:

# dest-opnsense.conf
destination d_opnsense {
    splunk(
        class(splunk_hec)
        template("$(format-splunk-hec)")
        hec_token("YOUR_HEC_TOKEN")
        url("https://your-splunk:8088/services/collector/event")
        index("opnsense")
        source("${.splunk.source}")
        sourcetype("opnsense:syslog")
    );
};

log {
    source(s_network);
    filter(f_is_source_identified);
    if (match("opnsense" value("fields.sc4s_vendor"))) {
        destination(d_opnsense);
        flags(flow-control,final);
    };
};
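A possibly lighter-weight variant, offered as an assumption about this setup rather than something confirmed in the thread: once a parser has set vendor/product, SC4S conventionally routes index and sourcetype via /opt/sc4s/local/context/splunk_metadata.csv, keyed on vendor_product. If the default HEC destination is acceptable, that may remove the need for a custom destination entirely:

# splunk_metadata.csv - key assumed from the parser above (vendor_product)
pfsense_opnsense,index,opnsense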
Hi @Silah

I don't think it's possible to save these after the execution - however, I wonder if you could use the JavaScript execution step to send a metric to Splunk Observability, like this:

function sendStockMetric() {
  // 1 = in stock, 0 = out of stock, based on the page text
  const status = document.body.textContent.includes("OUT OF STOCK") ? 0 : 1;
  fetch('https://ingest.{REALM}.signalfx.com/v2/datapoint', {
    method: 'POST',
    headers: {
      'X-SF-TOKEN': 'YOUR_TOKEN',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      gauge: [{
        metric: 'shop.item.stock_status',
        value: status,
        dimensions: {
          item: 'your_product_001',
          status: status === 1 ? 'instock' : 'oos'
        }
      }]
    })
  });
}
sendStockMetric();
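Once that metric is flowing, alerting on it should be straightforward with a detector. A minimal SignalFlow sketch, assuming the metric name from the example above:

detect(when(data('shop.item.stock_status') < 1)).publish('Item out of stock')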
Hi All

I am building synthetic monitoring in Observability Cloud for an online shop. One thing I want to monitor is whether or not an item's stock status is correct. I have a synthetic step "Save return value from JavaScript" to this effect:

function checkStockStatus() {
  if (document.body.textContent.includes("OUT OF STOCK")) {
    return "oos";
  }
  return "instock";
}
checkStockStatus();

I am storing this in a variable named stockStatus. Is there any way I can use the value of this variable in a dashboard, or to trigger an alert? For example, say I am selling a golden lamp and it gets sold out - how can I get a dashboard to show "oos" somewhere?

Thanks
Hi @alvinsullivan01,

this seems to be a JSON format. The raw data in a JSON event are different from the visualized ones, so to check your regex, open the raw data visualization and not the JSON visualization: probably you have to add some backslashes to your regex.

Ciao.

Giuseppe
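To illustrate the point with a sketch (the extracted field name is made up): in the raw event the JSON quotes are literal characters, so a regex that matches them needs escaping, e.g.:

| rex field=_raw "\"timestamp\":\s*\"(?<event_ts>[^\"]+)\""

Here each \" escapes a literal quote inside the SPL string, which is exactly the kind of backslash the JSON view hides from you.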
Hello,

I need to send all syslog data from OPNsense to a specific index. As this is not a known vendor source, what is the easiest way to do this? I have seen that you can create a parser based on certain factors, so I'm guessing this would be the easiest way? If so, does anyone have a good parser guide?
It looks like all these times are based on a common (other, unshown) field. Perhaps it would be helpful if you shared more information about how these current fields are calculated, as there may be a different calculation you can do to get nearer to your desired result? See the sketch below for one possibility.
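For instance, a sketch assuming the numeric columns from the table in the original post are available per incident, and that hrs holds the total duration expressed in hours (which the sample values suggest): averaging the numeric column directly avoids string parsing altogether.

| stats avg(hrs) as avg_hrs
| eval avg_total_min=round(avg_hrs*60)
| eval readable=floor(avg_total_min/60)."h ".(avg_total_min%60)."m"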
I have an issue transforming data and extracting field values. Here is my sample data:

2025-07-20T10:15:30+08:00 h1 test[123456]: {"data": {"a": 1, "b": 2, "c": 3}}

The data has a timestamp and other information at the beginning, and the data dictionary at the end. I want my data to go into Splunk in JSON format. Other than data, what I need is the timestamp. So I created a transform to pick only the data dictionary and move the timestamp into that dictionary. Here are my transforms:

[test_add_timestamp]
DEST_KEY = _raw
REGEX = ([^\s]+)[^:]+:\s*(.+)}
FORMAT = $2, "timestamp": "$1"}
LOOKAHEAD = 32768

Here is my props to use the transforms:

[test_log]
SHOULD_LINEMERGE = false
TRUNCATE = 0
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
disabled = false
pulldown_type = true
TIME_PREFIX = timestamp
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TZ = UTC
TRANSFORMS-timestamp = test_add_timestamp

After the transform, the data looks like this:

{"data": {"a": 1, "b": 2, "c": 3}, "timestamp": "2025-07-20T10:15:30+08:00"}

But when I search the data in Splunk, why do I see "none" as a value in timestamp as well? Another thing I noticed: my Splunk index has many events, and only a few have this timestamp extracted while most have no timestamp, which is fine. But when I click "timestamp" under interesting fields, why is it showing only "none"? I also noticed some of the JSON keys are not available under "interesting fields". What is the logic behind this?
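One way to narrow this down (a sketch; the index name is a placeholder): compare the auto-extracted field against an explicit spath extraction from _raw, which shows whether the key is really present in each event rather than relying on the sidebar's sampling:

index=your_index sourcetype=test_log
| spath input=_raw path=timestamp output=ts_from_raw
| eval has_key=if(isnull(ts_from_raw), "missing", "present")
| stats count by has_key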
@neerajs_81 When it's formatted as strings like "1 hour, 29 minutes", you'll need to convert those time strings into a numeric format first (typically total minutes) before applying stats avg(). E.g.:

| your_base_search
| eval time_parts=split(avg_time, ", ")
| eval hours=tonumber(replace(mvindex(time_parts, 0), " hour[s]?", ""))
| eval minutes=tonumber(replace(mvindex(time_parts, 1), " minute[s]?", ""))
| eval total_minutes=(hours * 60) + minutes
| stats avg(total_minutes) as average_min

Regards,
Prewin
Hello All,

Below is my dataset from a base query. How can I calculate the average value of the column?

Incident | avg_time           | days                | hrs                | minutes
P1       | 1 hour, 29 minutes | 0.06181041204508532 | 1.4834498890820478 | 29.00699334492286
P2       | 1 hour, 18 minutes | 0.05428940107829018 | 1.3029456258789642 | 18.176737552737862

I need to display the average of the avg_time values. Since there is date/time involved, merely doing the below is not working:

| stats avg(avg_time) as average_ttc_overall
Hi @livehybrid,

Well, the strange thing is that mongo7 is working for Splunk v10 when installing vanilla. I would strongly guess that the update does not work on Intel Macs either, because the mongo binaries needed for the migration are not in the macOS Intel tarball.

Cheers,
Andreas
Yes, you _can_ do that. It's just that managing search filters is less convenient, and maintaining them less straightforward, than simply selecting index access per role using the normal "allowed indexes" functionality. And you're still using search-time fields to limit access, and those search-time fields can be overridden by your users. There's no way around that as long as you don't have indexed fields to search by.

To be fully honest - you already seem to have a relatively convoluted data architecture (judging from what you're saying) and you're stuck building on top of that unless you do some rearchitecting, which might need quite a lot of effort. That's the point when you should engage an experienced consultant to take a look at your environment and your requirements and give you a more holistic analysis and recommendations. We can give you hints about how Splunk works and what _can_ be done with it, but we won't tell you what you _should_ do in the end, because we don't know the whole picture and we're not taking responsibility for the final decisions.
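To make the indexed-fields point concrete, a sketch only: the role name, index, and region field below are hypothetical, and this assumes region is extracted at index time (so users cannot redefine it at search time):

# authorize.conf - names are hypothetical
[role_emea_analyst]
srchIndexesAllowed = shared_index
# region::emea is indexed-field search syntax; it only constrains
# reliably because the field is written into the index itself
srchFilter = region::emea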
Yes, that should be left as-is.
Why can't I give srchFilter the role's index by default? What would be the drawback of this? For the prod roles I need to mention the summary index condition as well, with their restricted service, exactly the same way @splunklearner described. Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod
srchFilter = (index=non_prod)

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod) OR (index=opco_summary AND (service=juniper-prod OR service=juniper-cont))
No, I don't think you can modify the behaviour of the collect command, and hence the output event format (much). And yes, you'd have to do some magic like:

index!=your_specific_index OR (index=your_specific_index AND <additional conditions...>)

Ugly, isn't it?
Is there any way that I can get rid of the double quotes? And one more thing I noticed: if a user has access to 10 roles (10 indexes) and we have applied srchFilter to 6 of those roles, then when they search the other 3 indexes (which are not in any srchFilter), they see no results. Does that mean that if I use srchFilter by default, I need to include each role's index name in its srchFilter? The srchFilters are getting combined across all the roles with OR...