All Posts
A short answer: there is no way to reliably find all its uses. A longer answer: there are so many ways to "use" an index that while you can try to "grep" for some of them (mostly by calling the proper REST endpoints and filtering the output), you can never tell whether someone used a macro that expands to "index=whatever", or a subsearch that produces such a condition when evaluated. So you can try, but usages can hide from you. BTW, how would you want to "use an index in a lookup"?
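A minimal sketch of the "grep via REST" approach, assuming the index is named myindex (a placeholder; extend to other endpoints such as data/ui/views for dashboards):

| rest /servicesNS/-/-/saved/searches splunk_server=local
``` keep only saved searches whose SPL mentions the index ```
| search search="*index=myindex*"
| table title eai:acl.app eai:acl.owner search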
No. The range of events contained within a bucket is stored in the bucket directory name, so Splunk can easily judge whether to use that bucket in a search. And in a clustered setup the Cluster Manager knows which peer has which buckets, so Splunk can decide not to even dispatch a search to peers which do not hold buckets covering the interesting period of time. You can check bucket parameters using the dbinspect command (see the example below).

Splunk has no way of knowing when those events were indexed until it opens the bucket and reads the contents of the tsidx files. Typically events are indexed after the time at which they supposedly happened, but sometimes you ingest data about events which are supposed to happen in the future, so there is no way of telling whether events within a bucket covering timestamps from A to B were indexed before A, between A and B, or after B. That's why limiting by index time is "retroactive" - you do that on an already opened bucket.

Can't tell you about the drilldowns because I've never used index time for anything other than troubleshooting Splunk itself. The only use case I see where index time would be appropriate is if you have sources with highly unreliable clocks and thus highly unreliable timestamps. Of course then you'd probably want to define your sourcetype to use the current timestamp on ingestion instead of parsing the time out of the event, but sometimes you can't pinpoint the troublesome sources or you don't want to redefine the sourcetypes (for example, you have a separate data admin responsible for onboarding data and you're only responsible for ES). In such a case one could indeed try to use index time to search a selected subset of data. But it's a relatively uncommon scenario.
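A minimal sketch of such a dbinspect call, assuming an index named main; startEpoch and endEpoch are the event-time bounds encoded in each bucket:

| dbinspect index=main
``` one row per bucket, with its event-time range and location on disk ```
| table bucketId state startEpoch endEpoch eventCount path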
Basically he is showing how to change the .py files in the /bin directory. These changes would be overwritten if the app is updated.
I guess the main reason I was interested in index time is that it solves issues with ingestion delays or outages. Splunk outlines what I'm talking about in their docs: "Selecting Index Time when configuring correlation searches prevents event lag and improves security alerting. This is because configuring correlation searches using the Index time range can more effectively monitor data that arrives late and run the correlation searches against that data. Therefore, configure and run correlation searches by Index time to avoid the time lag in the search results and focus on the most recent events during an investigation. For example: Deploy a correlation search (R1) that runs every five minutes and checks for a particular scenario (S1) within the last 5 minutes to fire an alert whenever S1 is found. Correlation searches are based on extracted time. So, when S1 events are delayed by five minutes, no alerts might be triggered by R1 because the five minute window checked by the continuous, scheduled R1 never re-scans the events from a previous, already checked window. Despite those delayed events being known to exist, R1 is already set to check another time window, thereby, missing the opportunity to detect S1 behavior from delayed events. When correlation searches use extracted time, some events may land on the indexers a bit later due to a transport bottleneck such as network or processing queue. Event lag is a common concern for some Cloud data sources because some data might take an extended period of time to come into Splunk after the event is generated."
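A hypothetical sketch of scoping a scheduled search by index time using the built-in _index_earliest/_index_latest modifiers (the index, sourcetype, and criteria are made-up placeholders):

index=security sourcetype=auth:logins action=failure _index_earliest=-5m@m _index_latest=now earliest=-24h
``` earliest widens the event-time window so late-arriving events are still scanned ```
| stats count by user, src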
The data coming into one of our indexers recently changed. Now the format is different, and the fields are different. The values are basically the same. I need to be able to find where this index and data were being used in our environment's lookups, reports, alerts, and dashboards. Any idea how this can be accomplished?
Just wanted to provide an update to my issue. It looks like the problem was that the splunkfwd user created during the UF install didn't have permission to read /var/log. After I granted access (setfacl -R -m u:splunkfwd:rX /var/log), I started seeing logs on my indexer. Thanks everyone for your help.
I have a similar issue. The data we had coming into one of our indexes has now switched to a different format with slightly different field/value pairs. Now I am tasked with finding where this index/data is being used in lookups, reports, alerts, etc., so we can change the SPL to match the new data.
A very interesting answer. I'm a little confused when you say: "So even if you're searching limiting index time, Splunk still has to use some limits for _time or search through All Time." It seems like index time could be that limit, no? Trying to answer that question on my own: the issue, it seems, is that events in Splunk are fundamentally organized by event time, and when searching with index time, Splunk does not have that "inherent" knowledge of what the index time is for each event like it does with event time. Therefore, it must search over All Time in order to gather that information. Does that sound correct? I guess my follow-up to all of this would be: in what situations is it ever appropriate to use index time instead of event time (specifically in the context of alert creation)? That, and: what exactly is the effect on drilldowns?
In the case of a datamodel it's called acceleration. It's a process which runs a scheduled search extracting fields from the datamodel's data and indexing them in tsidx summary files for efficient searching later.
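Once a datamodel is accelerated, tstats can query those summaries directly. A minimal sketch, assuming the CIM Authentication datamodel is accelerated:

| tstats summariesonly=true count from datamodel=Authentication where Authentication.action="failure" by Authentication.src
``` summariesonly=true restricts the search to the accelerated tsidx summaries ```
| sort - count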
You're running it on different sets of data, right? So how are we supposed to know what the correct result is and why? Anyway, you're overthinking it. Replace the elaborate evals in your timechart with

| timechart span=5m count by status_summary

Oh, and please post searches in either code block or preformatted style. It makes them much more readable.
1. Browse through the logs directly from process startup. If there are issues with - for example - certificate file readability, you should see your errors there (a sketch of such a search is below).
2. Check the logs on the other side of the connection. They often tell more.
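If the instance is healthy enough to search itself, startup errors can also be pulled from the _internal index; a minimal sketch (the component names in the comment are only illustrative):

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
``` narrow further with component=TcpOutputProc or component=SSLCommon if needed ```
| table _time component log_level event_message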
Splunk always uses event time to limit the search range. It's the first and most important way of limiting the searched events, because Splunk only searches through buckets holding events from the given time range. So even if you're limiting by index time, Splunk still has to use some limits for _time or search through All Time.

Also, index time is not included in any of the datamodels (and it shouldn't be - that makes no sense - it's the event's time that's important, not when it was finally reported). Therefore, as it is not part of the datamodel fields, it will not be included in DAS (the datamodel acceleration summaries). And since it's not included, you can't search by it. "Index time filters" means adding conditions matching index time ranges. And as I said before, _time is one thing, _indextime is another. Since you want to filter by _indextime, you have no way of knowing what _time those events have. And since events are bucketed by _time, you have to search through All Time to find your events limited by _indextime. It's just how Splunk works.

Generally speaking, _indextime is a nice tool for troubleshooting Splunk itself, but it's not a very useful field for searching real data in real-life use cases.
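A minimal sketch of the pattern described above: filtering on index time still needs an explicit (wide) event-time window, because bucket selection happens on _time. The index name and windows are made-up placeholders:

index=web earliest=-7d latest=now _index_earliest=-60m@m _index_latest=now
``` earliest/latest pick the buckets by event time; _index_earliest/_index_latest then filter by arrival time ```
| stats count by sourcetype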
Port 9997 is indeed a default port, but not for receiving syslog data (for that you'd need to explicitly enable a tcp or udp input); it's for Splunk-to-Splunk communication (like forwarding data from Splunk forwarders to indexers). For a simple setup a direct tcp or udp input (depending on what you use) on your receiving indexer might be sufficient, but it's recommended to use an external syslog receiver and either write to files and ingest those files with a UF (the old way) or forward the data to a HEC input (the new way).
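A minimal inputs.conf sketch contrasting the two, assuming plain UDP syslog on port 514 (port and sourcetype are placeholders):

# inputs.conf on the receiving indexer
[udp://514]
sourcetype = syslog
connection_host = ip

[splunktcp://9997]
# this, by contrast, is the Splunk-to-Splunk receiving port
disabled = 0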
Well, I can't tell you if it would have solved your problem, because I have no idea if it was the same problem. It had the same symptoms, but maybe the underlying cause was different. It could have solved it if it was the same problem.
Yeah maybe some others will chime in. The only thing I can think of is that the number of alerts that show up in Triggered Alerts would be different depending on which option ("Once" or "For each") you select. I saw this post which is sort of similar, but no one responded to it. 
In addition to what @ITWhisperer says, what is the physical significance of _time currently in your data? Is there any reason why your ingestion should NOT use reported_date instead of whatever is used in the current data? That would make your requirement much simpler to fulfill. (If this is a viable alternative, there could be other benefits, too.)

That said, Splunk can always search records where reported_date falls within the last 15 months. Here, I will illustrate with the following caveat: reported_date is always earlier than or equal to _time. There can be other strategies to search if this condition is not true, but unless that is a problem in your case, the following method is simpler.

<your search criteria> earliest=-15mon
| where relative_time(now(), "-15mon") < strptime(reported_date, "%F") ``` "%F" -> "%Y-%m-%d" ```
Short answer is no. Events are timestamped by the _time field, and earliest and latest apply to this field, not to some other field in the event. You would have to apply a time period (earliest and latest) to your search that covers enough of your events to find the ones where reported_date is between the times you are interested in.
Hi All; I have a list of events which includes a field called reported_date, in yyyy-mm-dd format. I'm trying to create a search that looks for reported_date within the last 15 months from the current day. Is it possible to do an earliest and latest search on a specific field? Note: _time does not align with reported_date. Any assistance would be greatly appreciated! TIA
Hi @mobrien1, maybe "Once" and "For each result" come from Alerts. I can't find any other answer. Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hello, be careful: TA-WALLIX_Bastion/default/passwords.conf can break the Configuration page - both the add-on's own settings/configuration page and those of other add-ons.