All Posts

If I select "Last 24 hours" in the time filter, is there any automatic way to pass the 24-hour time range to the start date and end date?
Hi @lbrhyne, there's an issue with field extraction in Splunk when your logs contain one or more backslashes. In my opinion it's a bug, and I have discussed it with Support specialists. In this case, try using three or four backslashes instead of the two that work in regex101. You must use this workaround when you use the rex command in a search. If instead you want to use the regex for a field extraction outside of a search, use the regex exactly as it works in regex101. Ciao. Giuseppe
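To make the workaround above concrete, here is a minimal, self-contained sketch (the event text and field names are made up for illustration): in regex101 a literal backslash is matched with two backslashes, but inside a quoted rex pattern in a search the count often has to be doubled again, because the search-string parser consumes one layer of escaping before the regex engine sees the pattern.

```
| makeresults
| eval _raw="account=CORP\\jsmith"
| rex field=_raw "account=(?<domain>\w+)\\\\(?<user>\w+)"
| table domain user
```

If four backslashes don't match on your instance, step down to three, as suggested above; the exact count can vary with where the regex is used (inline rex vs. props/transforms).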
Hi @man03359, first of all, don't use commas in frozenTimePeriodInSecs. Then, the meaning of the four statuses is the following:
Hot: just-indexed data, in a bucket whose tsidx indexes are still being created; usable for online searches.
Warm: data indexed in the last few days, used by most searches and usable for online searches; usually located on high-performance storage (at least 800 IOPS, better more).
Cold: less recent data, used by few searches but still usable for online searches; usually located on less expensive storage.
Frozen: data stored offline, which can be recovered by copying the entire bucket into the thawed folder. To have frozen data you must configure Splunk to save it; by default it is deleted.
Data roll to frozen after the most recent event of a bucket exceeds the retention period; for this reason you can still see, in your searches, data older than the retention period. If you use a short retention period and you index little data, your buckets could pass directly from Warm to Frozen or be deleted. It's very unlikely that data pass directly from Hot to Frozen, because a bucket rolls from Hot to Warm when it reaches 10 GB or after three days; you would need a retention period of less than three days and less than 10 GB indexed in that period. For more details see https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Setaretirementandarchivingpolicy and https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Howindexingworks Ciao. Giuseppe
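As an illustration of the frozenTimePeriodInSecs point above, a sketch of an indexes.conf stanza for a 90-day retention (the index name and archive path are examples; note the value has no commas):

```
# indexes.conf
[main]
# 90 days x 24 h x 3600 s = 7776000 seconds (no commas)
frozenTimePeriodInSecs = 7776000
# optional: archive frozen buckets instead of deleting them
coldToFrozenDir = /opt/splunk/frozen/main
```

Without coldToFrozenDir (or coldToFrozenScript), buckets that roll to frozen are simply deleted.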
Have a look at Replicate a subset of data to a third-party system. You can modify it and do something like this:

props.conf

[your-sourcetype-here]
TRANSFORMS-routing = routeAll

transforms.conf

[routeAll]
REGEX=(.)
DEST_KEY=_TCP_ROUTING
FORMAT=yourIndexer,ThirdParty

outputs.conf

[tcpout]
defaultGroup=nothing

[tcpout:yourIndexer]
disabled=false
server=10.1.12.1:9997

[tcpout:ThirdParty]
disabled=false
sendCookedData=false
server=10.1.12.2:1234
I'm trying to forward logs based on index to a third-party system, and at the same time I still need to retain the logs in Splunk. I've tried adding tcpout in outputs.conf, but it only pushes all logs to the third-party system and doesn't store logs in Splunk; I'm unable to search new logs in Splunk.

[tcpout]
defaultGroup=index1

[tcpout:index1]
sendCookedData=false (tried with and without this; neither works)
server=1.1.1.1:12468
Hi, I am starting with Splunk administration and am confused about one topic. It might be silly. While creating an index, we get the option to set the Searchable Retention (in days). I have read in the documents that Splunk has 4 buckets: hot, warm, cold, and frozen. My question is: suppose I have set it to 90 days. During this 90-day period, will the data be in the hot bucket the entire time, and will it roll to frozen after the 90-day period is over? Also, how different is setting 90 days under Searchable Retention from setting this below?

[main]
frozenTimePeriodInSecs = 7,776,000

Please explain. Thanks in advance.
5 years later and I came upon this. This should be the actual, concise answer for OP's question.
You need to include agent logs; that will collect the daemonset, clusterReceiver, and gateway logs. You can configure it using this option: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/values.yaml#L572
@kamlesh_vaghela Sir, may I ask for help? If I want to limit which users can use this method, could you please teach me how to set the least privilege? I only know about configuring the power user role and the edit-kvstore capability, but I don't want to give normal users too much. Thanks for the help.
Brilliant, @yuanliu. Both solutions work. Thanks again.
How do we enable OTel gateway logs to flow through to Splunk? Even when we use the values.yaml settings noted here, we don't see any logs from the gateway: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/examples/collector-gateway-only/collector-gateway-only-values.yaml We're looking to get the gateway logs so we can better understand the health of the gateway.
Hi, I previously had success working with CSS selectors for Splunk dashboards with the help of people here (my previous question was solved), so please understand that I do understand CSS selectors, and I've been bashing myself at this for hours.

What I have is a standard bar chart of 2 series over time. I am trying to use a CSS selector to move the first series' bar position so that it overlaps with the second bar next to it, to give the appearance that the smaller bar is a subset of the larger bar. I have attached a photo using Google Inspect on the bar in a Splunk dashboard. You can see the bar for the first series has the class g.highcharts-series.highcharts-series-0. On the right side, you can see I injected a CSS selector into the webpage, and no combination of positioning seems to make the series budge at all.

Of note, I did find this paragraph on the Highcharts website (https://www.highcharts.com/docs/chart-design-and-style/style-by-css#what-can-be-styled): "However, layout and positioning of elements like the title or legend cannot be controlled by CSS. This is a limitation of CSS for SVG, that does not (yet - SVG 2 Geometric Style Properties) allow geometric attributes like x, y, width or height. And even if those were settable, we would still need to compute the layout flow in JavaScript. Instead, positioning is subject to Highcharts JavaScript options like align, verticalAlign etc."

Okay, so the bars probably cannot be moved. That is a very unfortunate limitation. Is it possible to make 2 bars overlap each other on a Splunk dashboard at all? I know I can work around it by using math to subtract one series from the other and stack the bars, but that is only a workaround.
Hi all, we have index=gems; in that index we have configured GEMS servers and WMS servers, and we have also created one alert named CBSIT Alert GEMS NFS stale. We want to create an alert for the WMS servers using the same alert, so a single alert should report the GEMS alert name when a GEMS server triggers and the WMS alert name when a WMS server triggers. The index=gems has 7 GEMS servers and 7 WMS servers. Example: GEMS server name sclpisgpgemspapp001; WMS server name silpdb5300.ssdc.albert.com. We are using the below SPL query for the alert named CBSIT Alert GEMS NFS stale:

index = "gems" source = "/tmp/unresponsive" sourcetype=cmi:gems_unresponsive
| table host _raw
| eval timestamp=strftime(now(),"%Y-%m-%d %H:%M:%S")
| eval correlation_id=timestamp.":".host
| eval assignment_group = "CBS IT - Application Hosting - Unix", impact=3, category="Application", subcategory="Repair/Fix", contact_type="Event", customer="no573", state=4, urgency=3, ci=host
| eval description = _raw, short_description = "NFS stale on ".host

Can you please help us here?
Or, using the old-fashioned extract (aka kv) and foreach:

| rename _raw AS temp, provTimes AS _raw
| rex mode=sed "s/\S+=/provTimes_&/g"
| kv
| foreach provTimes_* [eval sum = mvappend(sum, '<<FIELD>>')]
| eval sum = sum(sum)
``` below are cleanups, only if you want to restore world order ```
| fields - provTimes_*
| rex mode=sed "s/provTimes_//g"
| rename _raw AS provTimes, temp AS _raw

Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="provTimes
a=10; b=15; c=10;
x=10; b=5;"
``` data emulation above ```

Output from this emulation is:

provTimes            sum
a=10; b=15; c=10;    35
x=10; b=5;           15
The following expression works in regex101 (https://regex101.com/r/4D68Ip/1) but not in Splunk. Any help would be appreciated.

(?i)nTimeframe\s+\(\w+\)\s+\w+\s+\w+\s+\%\s+\w+\\\w+\\\w+\\\w+\\\w\d+\:\d+\-\d+\:\d+\\\w+\\\w+\\\w+\\\w(?P<Successful>\d+)

We are attempting to extract 58570 from the below string.

TEST STRING

run.\r\nTimeframe (PT) Success Failed % Failed\r\n\r\n05:15-06:14\r\n\r\n58570\r\n\r\n681\r\n\r\n1.15\r\n\r\nIf you believe you've received this email in error, please see your Splunk"}
I cannot seem to reproduce your results. I put your sample CSV on my laptop. The setting is:

[dhcp_timebased_lookup]
batch_index_query = 0
case_sensitive_match = 1
filename = dhcp_timebased_lookup.csv
max_offset_secs = 691200
time_field = time

Then I run this search:

| makeresults
| eval dest_ip = "10.223.5.43"
| stats values(dest_ip) as dest_ip by _time
| lookup dhcp_timebased_lookup ip AS dest_ip output time hostname
| eval lag = _time - time
| eval time = strftime(time, "%F %T")

It gives:

_time                dest_ip      hostname  lag    time
2024-02-28 14:16:08  10.223.5.43  host-43   64871  2024-02-27 20:14:57

I had a suspicion about that time_format setting (Splunk defaults to using seconds), but even after I add it, the search still returns a result. Could you try this on a spare instance?
IMO this might get a lot more fiddly than you think. If the system is sending in logs all the time (let's say, for example, they're Apache web servers), then even if you restart one, it'll come back online and catch up almost immediately. You can take that farther: even if you turn off the forwarder for a week, when it gets turned back on it'll catch back up (possibly in just a couple of minutes!). A week of "not reporting" just gets swept under the rug.

If you actually have *gaps* in your logs, that is... an entirely separate issue, and you should solve it, because that's not a thing that should really happen and it's generally fixable.

A bit better might be to use some shenanigans with the _indextime field, but that's also unlikely to be easily converted into a proper "wasn't sending in data" type report. It might be closer, but it's still a lot of extra work.

Even better than that might be to read some of the _internal indexes (the metrics one comes to mind) to find out which 30-second periods it wasn't responding in (or whatever). That would be more accurate.

But possibly best when it comes to that sort of information might be just getting the forwarder's messages that say "I'm restarting" and "I've restarted". THOSE would be easier to calculate a proper "downtime" from. You can start here:

index=_internal source=*splunkd.log* ("shutdown complete" OR "Splunkd starting")

As long as your internal retention is long enough, that should get you each stop/start sequence. You'll have to then eval some fields, do some stats with a 'by host' and so on, but it should get close. You could also run the above by host and | collect it to a summary index to keep just that information around for longer than retention on _internal would normally allow.

It will NOT tell you if the host was actually offline, though. If you disconnect the network cable... well, you should try it and see what logs show up for that!
I don't think it can tell you it disconnected, only that it reconnected but maybe there's a "seconds I was unable to talk to you" in that message too.  (Don't think so). Anyway, I'm sure that's a lot more words than you were expecting, but I wanted to explain the pitfalls of some of the first attempts people make for this sort of answer, since they don't often work that well.  Also happy to continue helping once you've explored a bit on your own, so if you get stuck ... post again in here!
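Picking up on the splunkd.log search earlier in this answer, here is a rough, untested sketch of the "eval some fields, do some stats by host" step. The pairing logic (each start matched to the previous stop on the same host) is my own assumption about how you'd want to compute downtime:

```
index=_internal source=*splunkd.log* ("shutdown complete" OR "Splunkd starting")
| eval state=if(searchmatch("shutdown complete"), "stop", "start")
| sort 0 host _time
| streamstats window=1 current=f last(_time) AS prev_time last(state) AS prev_state by host
| where state="start" AND prev_state="stop"
| eval downtime_secs = _time - prev_time
| stats sum(downtime_secs) AS total_downtime_secs count AS restarts by host
```

You'd still want to sanity-check the pairing against a known restart before trusting the numbers, and possibly | collect the per-host results to a summary index as suggested above.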
I am looking for help with Splunk configurations that the documentation does not seem to cover and that cannot be found on Splunk Answers. The problem is that selected fields are not persisting between sessions/alerts. I know this is possible, since my old version of Splunk had this ability. Example: 1. The user clicks on the drilldown search for a Notable Event and marks the Selected Fields to use. 2. The user closes the tab and reopens the same drilldown search for that Notable Event. 3. The Selected Fields are gone, back to the default state. How do I get selected fields to save per user?
I'm not completely clear on what the question is asking, but for the sake of a) getting discussion started to determine that, and b) maybe this is what you are looking for anyway...

The big search for total time: try changing those last three lines to the following 5.

...
| stats sum(time_difference) AS TOTAL, count as COUNT
| eval AVERAGE_TIME_PER_EVENT = TOTAL / COUNT
| fieldformat TOTAL = strftime(TOTAL, "%H:%M:%S")
| eval AVERAGE_TIME_PER_EVENT = tostring(AVERAGE_TIME_PER_EVENT, "duration")
| table COUNT, TOTAL, AVERAGE_TIME_PER_EVENT

I don't think I have any typos in the above, but maybe they exist. Anyway, all I do differently is add the count into your stats and eval a new field that's the total / count. Then I leave your fieldformat the same, but right after it I introduce you to "tostring" with a type of duration. That should do what you want, just like the fieldformat (H:M:S), but will ALSO add days to the front when appropriate (like 1+14:21:54 for 1 day plus 14 hours, and so on). You can change both to eval or use fieldformat for both; it's fine either way. Lastly, I added the two new fields to the table.

If this is nowhere near what you wanted the answer to be... well, maybe clarify your question a bit! Happy Splunking, Rich
Not sure why it only gives you one user, but try these. My mistake: to use the search meta-keyword, format is required. Try

Index="indexName" "eventType" = "user.authentication.sso"
  [inputlookup "users.csv"
  | rename email AS search
  | format]

Or

Index="indexName" "eventType" = "user.authentication.sso"
  [inputlookup "users.csv"
  | stats values(email) AS search
  | format]

Sorry about my mistake.