All Posts

@poojabolla Hi Pooja, the index names configured on the source (OpenShift) and the destination (Splunk) must match; only then will the index receive the data. OpenShift logs will not be sent to Splunk if the index name differs.
Yes, I have a summary search which ends in:

| eval _time = _time + 3600

This sets the timestamp of the summary-indexed events to one hour in the future. Then, when I search for the summarized events using the time filter of "Last 24 hours", it does not find any events (as expected). When I search for the summarized events with a custom time filter from +1m to +2h, it does find the events, timestamped one hour in the future. Thus this method should be useful for setting the timestamps of your summary-indexed events to fall within your expected search window.
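For context, a minimal sketch of the whole pattern, assuming a hypothetical source index and summary index (index=web, sourcetype=access_combined, and index=my_summary are placeholders, not from my actual setup):

index=web sourcetype=access_combined earliest=-1h@h latest=@h
| stats count AS requests BY host
| eval _time = _time + 3600
| collect index=my_summary

To retrieve the future-stamped summary events, use a custom time range that extends past "now", e.g. earliest=+1m latest=+2h.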
The question could be worded better. You could ask your instructor if they want you to "estimate how many users, in each user category, have performed a successful authentication." If that is the question being asked, you could filter to only successful authentications and then use Splunk's statistical commands to produce a table counting how many distinct users in each category have successfully authenticated.
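If it helps, a minimal sketch of such a search, assuming hypothetical index and field names (index=authentication, action, user, and user_category are placeholders for whatever your data actually contains):

index=authentication action=success
| stats dc(user) AS distinct_users BY user_category

dc() counts distinct users, so a user who authenticates successfully many times is counted only once per category.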
Hi @Ryan.Paredez, Unfortunately, I have not found any resolution for my issue just yet, and it is not clear from the documentation how to use dbagent.mssql.cluster.discovery.enabled from the Controller UI. Thank you, Osama
The typical case of disappearing users is connected to the issue of grantable roles. See https://community.splunk.com/t5/Splunk-Enterprise/Splunk-Enterprise-upgrade-to-9-1-0-1-all-users-disappeared/m-p/650181
Yep. tstats and timechart are a good way to find out whether you have significant changes in your event counts (as opposed to normal daily/weekly variance). You can draw yourself a nice line/bar chart and easily see visually whether your event rates are changing. You can use different aggregations to investigate further (by sourcetype, by source, by host...). Typically, event size distribution should not change much unless something has changed on the source's side. (If you have many different sources, a change in just one or two of them would not show up much in the overall data rate, unless of course a single source "dominates" your data.)
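As a minimal sketch of that kind of check (index=main is a placeholder; pick the span and grouping that suit your data):

| tstats count where index=main by _time span=1h, sourcetype
| timechart span=1h sum(count) by sourcetype

Swap sourcetype for source or host to slice the same event counts a different way.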
Firstly - what do you mean by "structured" here? If you mean INDEXED_EXTRACTIONS, the situation gets complicated because the UF does the parsing and the event is not touched after that (except for ingest actions).

If you just mean well-known and well-formed events, you could try enabling force_local_processing on your UF:

force_local_processing = <boolean>
* Forces a universal forwarder to process all data tagged with this sourcetype
  locally before forwarding it to the indexers.
* Data with this sourcetype is processed by the linebreaker, aggregator, and
  the regexreplacement processors in addition to the existing utf8 processor.
* Note that switching this property potentially increases the cpu and memory
  consumption of the forwarder.
* Applicable only on a universal forwarder.
* Default: false

It's worth noting, though, that this is not a recommended setting and it is not widely used, so you may have problems finding support in case anything goes wrong.
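A minimal sketch of where that would go, assuming a hypothetical sourcetype (my_sourcetype is a placeholder for whatever your data uses) - this lives in props.conf on the universal forwarder:

# props.conf on the universal forwarder
# "my_sourcetype" is a placeholder - substitute your actual sourcetype
[my_sourcetype]
force_local_processing = true

With this set, the UF runs the line breaker and related processors locally, at the cost of extra CPU and memory on the forwarder.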
I did a test on my local UF. It resulted in:

1) The splunkd process opening hundreds of files (verifiable with "ls -la /proc/<splunk_pid>/fd")

2) A huge number of entries like this in splunkd.log:

03-09-2024 09:34:05.449 +0100 WARN FileClassifierManager [7610 tailreader0] - The file '/usr/bin/mariadb-import' is invalid. Reason: binary.
03-09-2024 09:34:05.449 +0100 INFO TailReader [7610 tailreader0] - Ignoring file '/usr/bin/mariadb-import' due to: binary

So splunkd is trying to read the files but doesn't ingest them because they are binary, and then apparently gives up after exhausting the open file descriptor limit (default: 100).
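For what it's worth, the limit I'm referring to appears to be max_fd in limits.conf; treat the following as a sketch based on my assumption about which setting applies, and verify against the limits.conf.spec for your version:

# limits.conf
[inputproc]
# Maximum number of file descriptors the tailing processor keeps open.
# Default is 100; raising it here is an assumption, not a confirmed fix.
max_fd = 256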
I’m using Splunk Enterprise 9 with Universal Forwarder 9 on Windows. I'd like to monitor several structured log files but only ingest specific lines from these files (each line begins with a well-defined string, so it's easy to create a matching regular expression or a simple match against it). I’m wondering how this can be achieved. Q: Can the UF do this natively, or do I need to monitor the file as a whole and then drop certain lines at the indexer?
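For the indexer-side variant mentioned in the question, a minimal sketch using the standard nullQueue routing pattern (the sourcetype name and the MYPREFIX regex are placeholders, and this is one common approach rather than the only one):

# props.conf on the indexer (or a heavy forwarder)
[my_structured_logs]
TRANSFORMS-filterlines = drop_everything, keep_prefixed_lines

# transforms.conf
# First send every event to the null queue...
[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then route events whose line starts with the well-defined string back to the index queue.
[keep_prefixed_lines]
REGEX = ^MYPREFIX
DEST_KEY = queue
FORMAT = indexQueue

Note this assumes each event is a single line; the UF itself cannot apply these regex-based transforms natively, as only a heavy forwarder or indexer runs them.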
It sounds like you are doing everything right. I freshly installed the CIM Validator v1.8.2 on Splunk Enterprise v9.2.0 with the Splunk CIM app v5.3.1; then, without configuring any of these apps, I entered "index=_internal" into the search bar of the CIM Validator app dashboard and could see results. It is very strange that you are able to retrieve logs from regular Splunk searches but get no results when you copy the search query into the CIM Validator and set the time range to be equivalent. At the very bottom of the CIM Validator dashboard, there is a table labeled "Events". You can click the magnifying glass on this table and it will open a search containing your search query followed by "| head 10000". As this is a very straightforward search, you should be able to see events in this table once you enter your query into the "Search" input of the CIM dashboard.
I have a fresh install of Splunk 9.2. I navigated to the /opt/splunk/bin folder where the python binaries are stored. Upon running ./python --version, I get: Python 3.7.17
This is a great description of your reproduction method.  More clarifications are needed, though.  This "Event log field" must be a custom field specific to the data source you are looking at.  Is this correct? (Splunk can be used for all kinds of data; "Event" or "Event log" is not an inherent field name.)  I also assume that the behavior is observed in "Smart Mode" or "Verbose Mode".

After trying your sequence with index=_internal, I have an educated guess about what you are trying to solve.  Rest assured that the "Event" or "Event log" field in your data is not "always blank".  But this specific field is indeed populated in fewer than 1% of total events in your search period.  In other words, the field always exists in the events that have populated it, but Splunk won't list the field name in the left-hand sidebar, either under "Selected Fields" or "Interesting Fields".  This is normal behavior that saves analysts from being overwhelmed by too many fields.

I suggest the following exercise to familiarize yourself with Splunk's search UI.  Instead of "Select All Within Filter", just scroll to the field of interest, say "Event".  I predict that you will see "Event Coverage" < 1%.  Click the corresponding checkbox, then return to the search window.  Now you will see "Event" under "Selected Fields".

Indeed, Splunk doesn't save this in user preferences.  You do not even need to restart the browser.  Just click "Smart Mode", change to "Fast Mode", then change back to "Smart Mode".  Your selection is gone.

So, what to do?  As said above, nothing is lost.  If you search for events in which this field is populated, those events will still show.  Try this:

<your source specs> Event=*

Now Splunk will automatically show the "Event" field under "Interesting Fields". (Because when you do this, the field has 100% event coverage.)

If you perform other calculations with this field, even while its event coverage is < 1%, Splunk will still get it.  Try this:

<your source specs>
| stats count(Event) as EventCount count

It will still display how many events have the Event field populated, and you can see how this population compares with your total event space.  You can also try this exercise:

<your source specs>
| eval does_Event_exist = "yes, yes, yes!! " . Event
| stats values(Event) as EventValue values(does_Event_exist) as does_Event_exist count(does_Event_exist) as calculatedCount count(Event) as EventCount

You should get calculatedCount and EventCount equal to each other even though you are counting different fields; the EventValue field should show you all values of the Event field during the search period; and does_Event_exist should show "yes, yes, yes!! " prepended to each value of EventValue.

Hope this helps.
The CMC may have dashboard panels showing ingestion by index and/or sourcetype.  You can get similar information yourself with a search like this.  It counts events rather than TB for better performance, but should offer good insights.

| tstats prestats=t count where index=foo sourcetype=bar by _time span=1d
| timechart span=1d count
That's my interpretation of what's in the manual.
Hello Splunk Community,

I'm encountering an issue with the SA-cim_validator add-on where it's returning no results, and I'm hoping someone here can help me troubleshoot this further. Here's what I've done so far:

- Confirmed that the app has read access for all users and write access for admin roles.
- Checked that the configuration files are correctly set up.
- Splunk Common Information Model (Splunk_SA_CIM) is installed and up to date.
- Verified that the indexes and sourcetypes specified in the queries are present and contain data.
- Reviewed time ranges to include periods with log generation.
- Ensured that data models are accelerated as needed.
- Looked through Splunk's internal logs for any errors related to the SA-cim_validator but found nothing.

Despite these steps, every time I run a search query within the CIM Validator, such as index=fortigate sourcetype=fortigate_utm, it yields no results, regardless of the indexes, targeted data model, or search parameters I use.

Does anyone have any insights or suggestions on what else I can check, or any known issues with the add-on? Any assistance would be greatly appreciated!

Thank you, Alex_Mics
Looking for a solution as well. Would you mind sharing if you resolved this issue of getting Azure application logs to on-prem Splunk?
@bowesmana that worked, thank you
Sweet, I was probably typing (got distracted) when you were posting.  Glad we had the same answer. 
Hello, Have you tested it yourself? I have tried your suggestion, but it did not work. By default, _time was set to info_min_time. Thanks, Marius
I have some ideas about this.

First, you'll need your dates/times in datetime format, convertible to unix epoch time.  Here's the mockup I'm working with - run this and take a close look at the output. (Note the %z format code, which handles numeric timezone offsets like -0600.)

| makeresults format=csv data="time_start, time_end, person
2024-03-07T07:00:00-0600, 2024-03-07T11:00:00-0600, Rich
2024-03-08T07:30:00-0600, 2024-03-08T15:00:00-0600, Rich"
| eval _time = strptime(time_start, "%Y-%m-%dT%H:%M:%S%z"), time_end_unix = strptime(time_end, "%Y-%m-%dT%H:%M:%S%z")
| append [| makeresults format=csv data="incident_time, incident
2024-03-07T06:50:00-0600, blizzard
2024-03-07T11:23:00-0600, hurricane
2024-03-08T13:44:00-0600, tornado
2024-03-08T18:03:00-0600, dust_storm"
| eval _time = strptime(incident_time, "%Y-%m-%dT%H:%M:%S%z") ]

I'm just making it up, though - hopefully your search will look more like

(index=timetracking (sourcetype="user:clockin" OR sourcetype="user:clockout") OR index=incidents)
| eval time_end_unix = strptime(<my end time field>, <my time format string>)

You'll need to install the Timeline visualization from https://splunkbase.splunk.com/app/3120

Now that we have that data: the Timeline viz requires a specific format of data, so let's do a little work.  You need a duration, right?  (And it has to be in milliseconds, so in most sane environments you'll just want to multiply the "seconds" by 1000.)  So to the end of the above search, add this:

| eval duration = if(time_end_unix>0, (time_end_unix - _time) * 1000, 0)

Now, take a look at those results.  The important fields are _time, duration, and incident and person.  But incident and person are two different things, and the visualization won't like that a lot.  So let's put them together.  Add to the end of *that*:

| eval item = coalesce(person, incident)

Now the output includes "item" with either "Rich" or some random weather event.  Good enough!  At least for pass #1.  Let's add a table command to the end of *that*:

| table _time, item, duration

Then click the visualization tab, change it to the Timeline, and ... well, it's close!  Unfortunately, I don't like the blizzard, hurricane, tornado, and dust_storm all being on their own lines.

How can we fix that?  Well, one way would be to make all the incidents' "names" just be "incident" so they'd show up on a single line.  To do that, change the "eval item =..." line to

| eval item = if(isnotnull(person), person, "incident")

and leave the ending | table command.

I honestly think that as long as you may have multiple people's schedules, this is as good as you can get.  But if you absolutely know there will never be another person, you could make them sit on top of each other by any mechanism that just makes "person" always be "Rich" - a fillnull command, for instance.

Anyway, my suggestion is to play with my examples until you understand them.  Once you do, you might be able to make your own data work like this.  But if you still have difficulties after all that, post back and we'll see if we can help!

Also, it's entirely possible someone else will come up with a completely different type of answer.  I don't know, it'll be interesting if they do - there's a lot of smart folks around here and we don't all think alike!

Happy Splunking, Rich