The Splunk app for Linux already provides a stanza for collecting all the .log files in the /var/log folder ([monitor::///var/log]). But what if I want to write specific regexes/transformations for a specific .log file, given its path? For example, I want to apply transformations by writing specific stanzas in props.conf and transforms.conf for the files /var/log/abc/def.log and /var/log/abc/ghi.log. How do I make both of these use the sourcetype "alphabet_log" and then write its regex functions? I also have a question regarding the Splunk docs. The props.conf docs state:

For settings that are specified in multiple categories of matching [<spec>]
stanzas, [host::<host>] settings override [<sourcetype>] settings.
Additionally, [source::<source>] settings override both [host::<host>]
and [<sourcetype>] settings.

What does "override" mean here? Does it override everything, or does it combine the stanzas and only override the duplicate settings?
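For context, a minimal sketch of the kind of stanzas being asked about; the extraction name and regex below are placeholders, not a known working config, and the sourcetype override has to be in place where the data is parsed:

props.conf:
[source::/var/log/abc/def.log]
sourcetype = alphabet_log

[source::/var/log/abc/ghi.log]
sourcetype = alphabet_log

[alphabet_log]
REPORT-alphabet_fields = alphabet_fields

transforms.conf:
# placeholder regex - replace with the actual extraction
[alphabet_fields]
REGEX = level=(?<level>\w+)

The [source::...] stanzas assign the sourcetype, and the [alphabet_log] stanza then carries the search-time extractions for both files.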
A typical way to do this is via a background search that uses the actual time token to run the search and then sets tokens based on the addinfo output, which gives you info_min_time and info_max_time, e.g. <search>
<query>
| makeresults
| addinfo
</query>
<earliest>$schedule_dttm.earliest$</earliest>
<latest>$schedule_dttm.latest$</latest>
<done>
<set token="schedule_dttm_epoch_earliest">$result.info_min_time$</set>
<set token="schedule_dttm_epoch_latest">$result.info_max_time$</set>
</done>
</search> Then you just use the new tokens in your searches. You can also do a similar thing with a subsearch, by using addinfo to get the actual converted times and returning earliest and latest fields from the subsearch, but that is not the same as saving the epoch values.
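As a rough sketch of that subsearch variant (the index and sourcetype here are placeholders, and it assumes the dashboard's time tokens apply so the subsearch inherits that time range):

index=my_index sourcetype=my_sourcetype
    [| makeresults
     | addinfo
     | return earliest=info_min_time latest=info_max_time ]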
Building on what @ITWhisperer says about join, the way to avoid join is to use stats, so you want to do something like this: index=idx1 ... (identifiers here)
| rex "EventId: (?<event_id_1>\d+)"
| rex "\"EventId\",\"value\":\"(?<event_id_2>\d+)"
| eval event_id=coalesce(event_id_1, event_id_2)
| stats values(*) as * count by event_id
| where count=1 Your two rex statements capture to their own fields, and then you find the common field event_id with coalesce; the stats count then counts the events per event_id. Depending on what your data looks like and how many events you actually get, the stats statement can be adjusted to get the correct count of your two distinct message types. The values(*) as * carries all of the other fields you want to preserve through the stats, so use a fields statement before stats to restrict what comes out, as in the sketch below.
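For example, a minimal sketch of restricting fields before the stats (the field names other than event_id are hypothetical):

| fields event_id, host, message
| stats values(*) as * count by event_id
| where count=1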
The ttl of my latest job for this report is set to expire approximately 2 days (~170000+ seconds) from now. I've been using the _audit index to check the resultCount for these jobs and it has never been 0. I've also been checking the ttl via the Activity > Jobs view (2nd image in screenshot; I had to take a screenshot of my two screenshots to get past the 1-attachment limitation). Is it possible that having multiple saved jobs from this search alive at the same time causes an issue? I have two alive currently, one from yesterday morning and one from this morning (similar to what you're showing in the screenshot). If this is a possibility, any recommendations on how I can keep only one report job alive at any point in time?
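For reference, the kind of _audit check described above might look something like this sketch (the saved search name is a placeholder, and the field names are the usual ones found in audit.log completion events):

index=_audit action=search info=completed savedsearch_name="My Report"
| table _time result_count total_run_time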
I'm trying to filter out all perfmon data using ingest actions. When I try to view the samples, I get this error. I checked to see if my forwarders have the same pass4SymmKey and they do. I am not sure what to do; I'm checking now to ensure the firewall isn't blocking communication, but I think that is unlikely. I can see the servers in forwarder management picking up the deployment apps from the indexer. Does anyone have any ideas?
We recently updated our deployment server to version 9.4.1. Whenever the page loads, the default view shows the GUID of the clients but not the hostname and IP. Every time, you have to click the gear on the right side to select the extra fields. This selection is not persistent and you sometimes have to do it again. How do we make it persistent?
Hi @dinesh001kumar It's not currently possible to automatically switch between tabs in Dashboard Studio. The only thing you could do is use a browser plugin that rotates through the relevant URLs at the required interval. The other thing I wanted to check: are those 10 panels backed by a single base search? Rotating 10 panels every 20 seconds feels like a quick way to use up a load of resources in your Splunk stack! I'd be careful taking this approach, but I appreciate you may have already considered this. Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Firstly, try to avoid joins; they are limited and slow. Secondly, the most productive way of getting an answer to your question is to provide sample events (in raw form, in a code block using the </> button) which demonstrate your issue, e.g. events which have matching event ids and events which don't. It would also be useful to know what fields you already have extracted (so we don't duplicate any extractions you already have set up).
There is a process I'm trying to track. It starts by generating a single event. Then, asynchronously, a second event is created. My problem is that the async process often fails. I would like to find all occurrences of the first event that do not have a corresponding second event. I know how to search for each event independently. They share a couple of common identifiers that can be extracted. I have tried a subsearch and a join but have not gotten any results. As a compressed and simplified example, here is my pseudo search: index=idx1 ... (identifiers here)
| rex "EventId: (?<event_id>\d+)"
| join type=left event_id
    [ search index=idx1 ... (identifiers here)
      | rex "\"EventId\",\"value\":\"(?<event_id>\d+)" ]
Both events occur at about the same time, usually within a second. They share the EventId extracted field, which can be considered unique within the time period I'm searching. Limits are not an issue as this process occurs about 100 times a day. So how can I list out the EventIds from the main search that do not have a match in the second search? Thank you, experts!
Try an eval element with a case function. <eval token="foo">case("$schedule_dttm.latest$"=="now", now(), isnum(tonumber("$schedule_dttm.latest$")), tonumber("$schedule_dttm.latest$"), 1==1, relative_time(now(), "$schedule_dttm.latest$"))</eval>
Hi everyone. I have a token called "schedule_dttm" that has two attributes: "earliest" and "latest". By default, "schedule_dttm.latest" is initialized with "now()", but it can hold data in three different formats: the "now" I just mentioned, a specific epoch timestamp, or a relative timestamp such as "-1h@h". My goal is to convert all of them to an epoch timestamp, so the second case is trivial for me. But how do I (1) check which format the date is in and (2) create logic to convert it conditionally based on its format? Thanks in advance, Pedro
I have created a Studio dashboard in Splunk Cloud. I have created multiple panels in each tab, for example 10 panels per tab in a single Studio dashboard. Is there any way to configure it to auto-rotate through each and every tab every 20 seconds?
"Tried this, but I get a new row in table3.csv" That is exactly what appendpipe is supposed to do. If this is not acceptable, do not use appendpipe. Have you tried the fillnull that @gcusello and I suggested?
We're trying to suppress the warnings for reports that use the dbxlookup command to enrich data in the report. We have a pretty simple setup with one search head and one indexer. We created a commands.conf file under the $SPLUNK_HOME/etc/system/local/ folder with the following contents. There are no commands.conf files anywhere else on the system except under the default folders. After restarting, nothing changed.

# Disable dbxlookup security warnings in reports
[dbxlookup]
is_risky = false

Thinking that perhaps this needed to be added under our app's local folder, we moved the file there and restarted. Once done, we encountered Java and Python errors running any reports with dbxlookups. What are we missing? Thanks!
Hi @Cole-Potter, you need a Splunk Enterprise license; its size depends on the volume of the indexed logs. For this reason, I suggest avoiding indexing locally. Ciao. Giuseppe
Sorry about the week-late reply, but that does not seem to work. I am still getting logs that I don't need; I just disabled ingestion from that folder location. Does Splunk have any app that would filter data more easily than creating the transforms.conf and props.conf files?
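For reference, the props.conf/transforms.conf approach mentioned above usually looks something like this sketch (the sourcetype name and regex are placeholders, not from this thread), deployed where the data is parsed (indexer or heavy forwarder):

props.conf:
[my_sourcetype]
TRANSFORMS-drop_unwanted = drop_unwanted

transforms.conf:
[drop_unwanted]
REGEX = pattern_to_drop
DEST_KEY = queue
FORMAT = nullQueue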