All Posts



1. What do you mean by "capture dataset"? 2. If you just do stats by _time without binning _time first, you'll get a lot of results that won't be comparable with anything.
You explicitly search for earliest=-30d, so you're getting results from the last 30 days.
So if I understand that correctly, all the typical config items applicable to inputs are now available at the individual HEC token level, right?
I'll take a look, and thank you!
@dataisbeautiful You should be able to control this by adding this line:

display.visualizations.charting.chart = line

to the following .conf file(s):

Global: $SPLUNK_HOME/etc/system/local/ui-prefs.conf
Per User: $SPLUNK_HOME/etc/users/<username>/system/local/ui-prefs.conf

For more information see: https://docs.splunk.com/Documentation/Splunk/9.3.1/admin/Ui-prefsconf
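A minimal sketch of what the global file might contain, assuming the setting should apply in the Search & Reporting app context (the stanza name here is an assumption; adjust it to the app/view you actually want to affect):

```
# $SPLUNK_HOME/etc/system/local/ui-prefs.conf
# Sketch only: [search] targets the Search & Reporting app;
# use [default] to apply the preference everywhere.
[search]
display.visualizations.charting.chart = line
```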
When creating an incident for a specific server, we want to include a link to that entity in IT Essentials Work; however, the URL appears to only be accessible using the entity_key.

Is there any simple way to get the URL directly to an entity from the hostname, or is it required to get the entity_key from the itsi_entities KV store and then combine that into the URL?

In the Splunk App for Infrastructure, you could simply use the host name in the URL, but I cannot find any way to do this with ITEW.

Example URL: https://<stack>.splunkcloud.com/en-US/app/itsi/entity_detail?entity_key=82570f87-9544-47c8-bc6g-e030c522barb

Looking to see if there's a way to do something like this: https://<stack>.splunkcloud.com/en-US/app/itsi/entity_detail?host=<hostname>
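As a sketch of the workaround the post itself describes — resolving the entity_key from the itsi_entities KV store by hostname and building the URL from it — something like the following might work. The field names (title, _key) are assumptions about the collection's schema, and <stack>/<hostname> are the same placeholders used above:

```
| inputlookup itsi_entities where title="<hostname>"
| eval url="https://<stack>.splunkcloud.com/en-US/app/itsi/entity_detail?entity_key=" . _key
| table title url
```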
Thanks, @hrawat. What tags are available? Where can we find out more information about this feature?
Try setting DATETIME_CONFIG = in props.conf to disable the automatic timestamp extractor.
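A minimal props.conf sketch of that suggestion; the sourcetype name is a placeholder:

```
# props.conf (sketch; replace your_sourcetype with the real sourcetype)
[your_sourcetype]
# Per the suggestion above: a blank value disables the
# automatic timestamp extractor for this sourcetype.
DATETIME_CONFIG =
```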
Hi team, I have been experiencing issues with log ingestion on a Windows server and I was hoping to get some advice.

The files are generated on a mainframe and transmitted to a local share on a Windows server via TIBCO jobs. The files are generated in 9 windows throughout the day, 3 files at a time, varying in size from a few MB up to 3 GB. The solution has worked fine in lower environments, likely because of looser file/folder restrictions, but in PROD only one or two files per window get ingested. The logs indicate that Splunk can't open or read the files.

The running theory is that the process writing the files to disk is locking them, so Splunk can't read them. I'm currently reviewing the permission sets for the TIBCO service account and the Local System account (the Splunk UF runs as this account) in the lower environments to try and spot any differences that could be causing the issue, based on the information in the post below: https://community.splunk.com/t5/All-Apps-and-Add-ons/windows-file-locking/m-p/14126

In addition to that, I was exploring the possibility of using the "MonitorNoHandle" stanza, as it seems to fit the use case I am dealing with: monitoring single files that don't get updated frequently. But I haven't been able to determine, based on the documentation, whether I can use wildcards in the filename. For reference, this is the documentation I'm referring to: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Data/Monitorfilesanddirectorieswithinputs.conf#MonitorNoHandle.2C_single_Windows_file

I'd appreciate any insights from the community, either regarding permissions or the use of the "MonitorNoHandle" input stanza. Thanks in advance,
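For reference, a sketch of what a MonitorNoHandle input on the UF might look like; the path, sourcetype, and index below are placeholders, and the linked documentation should be the authority on whether wildcards are accepted in the path:

```
# inputs.conf on the Windows universal forwarder (sketch; all values are placeholders)
[MonitorNoHandle://C:\mainframe_share\transfer.log]
sourcetype = mainframe:transfer
index = mainframe
```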
Hi, I'm struggling to get our security tool alerts (e.g., Darktrace, Palo Alto) into ES as notable events, so that our security analysts can go in, look at all alerts, and have a single-pane-of-glass view. Could you please assist with how to configure a correlation search that creates notables when Darktrace alerts are logged into Splunk? Many thanks in advance!
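A generic sketch of what the base search of such a correlation search might look like; the index, sourcetype, and field names here are assumptions and need to match how the Darktrace data actually lands in your environment. The search itself would be saved in ES as a correlation search with the Notable adaptive response action enabled:

```
index=security sourcetype=darktrace severity>=70
| stats latest(_time) AS _time values(message) AS message count BY src_ip model_name
```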
Looks like the issue was with "LINE_MERGE=TRUE" in the props.conf file. Thank you @PickleRick and @yuanliu for chiming in.
Putting together a query that shows, on an individual alert level, the number of times the alert fired in a day and the average we were expecting. Below is the query as it stands now, but I am looking for a way to only show records from today/yesterday, instead of for the past 30 days. Any help would be appreciated.

index=_audit action="alert_fired" earliest=-30d latest=now
| eval date=strftime(_time, "%Y-%m-%d")
| stats count AS actual_triggered_alerts by ss_name date
| eventstats avg(actual_triggered_alerts) AS average_triggered_alerts by ss_name
| eval average_triggered_alerts = round(average_triggered_alerts,0)
| eval comparison = case(
    actual_triggered_alerts = average_triggered_alerts, "Average",
    actual_triggered_alerts > average_triggered_alerts, "Above Average",
    actual_triggered_alerts < average_triggered_alerts, "Below Average")
| search comparison!="Average"
| table date ss_name actual_triggered_alerts average_triggered_alerts
| rename date as "Date", ss_name as "Alert Name", actual_triggered_alerts as "Actual Triggered Alerts", average_triggered_alerts as "Average Triggered Alerts"
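One way to keep the 30-day window for computing the average while displaying only today's and yesterday's rows is to filter after the eventstats. This is an untested sketch, relying on the fact that ISO dates compare correctly as strings:

```
index=_audit action="alert_fired" earliest=-30d latest=now
| eval date=strftime(_time, "%Y-%m-%d")
| stats count AS actual_triggered_alerts by ss_name date
| eventstats avg(actual_triggered_alerts) AS average_triggered_alerts by ss_name
| where date >= strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
```

The rest of the original query (the comparison eval, search, table, and rename) would follow unchanged after the where clause.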
Is the list of clients displayed on the Forwarder Management console synchronized with other Splunk servers? We have two Splunk deployment servers and a cluster manager that show the same list. Previously the DS only showed the clients it was actively connected with. Did this feature get added in 9.2 when the DS was updated?
Why don't you try with macros and an if/case statement?
You know? You're right, I hadn't looked at it that way. Still don't like it.   Thanks.
Essentially, it is a matter of interpretation of the chart - it could be argued that the "space" between 15:00 and 16:00 represents the events in this time (hence the space in the chart graphic). You could use a column chart to show the space "occupied" with a graphic.
Well, the chart takes up the space needed for data points from 12:00 to 16:00, but since there isn't any data in the 16:00 bin the graphic stops at 15:00 and leaves a void where 15:00 to 16:00 would normally be (if you cut a chunk of time out of a larger graph, that is). That space is 1/4th of the time chart panel with a four-hour window. It's a third with a three-hour window. Is there any way to eliminate that void and stretch the chart across to fill the space?
This solution does not work; I am getting an empty result. I think there is an issue where the myInput variable is not passed into append. Another issue with this solution is that both queries will run even though we know beforehand which query to run, so I am looking for an optimized solution where only one query runs based on the filter.
The time range 12:00 to 16:00 is for timestamps greater than or equal to 12:00 and less than 16:00, i.e. you don't get times beginning at 16:00, so you are getting what you asked for. When timestamps are binned by the timechart command, each timestamp is taken back to the start of the time slot it is binned into. What would you expect to be in the 16:00 data point, given that your earliest and latest values have not included any events beyond 16:00?
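The binning behaviour described above can be illustrated with a simple timechart; the index and time range here are just for illustration:

```
index=_internal earliest=-4h@h latest=@h
| timechart span=1h count
```

An event at, say, 15:59 lands in the bin labelled 15:00; no bin labelled 16:00 appears, because latest=@h excludes everything from the current hour onward.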
I have the following props which works fine in the "Add Data" GUI and a test file of logs:

EVENT_BREAKER = ([\r\n]+)\<.+\>\w{2,4}\s\d{1,2}\s
EVENT_BREAKER_ENABLE = true
LINE_BREAKER = ([\r\n]+)\<.+\>\w{2,4}\s\d{1,2}\s
MAX_TIMESTAMP_LOOKAHEAD = 25
SHOULD_LINEMERGE = false
TIME_FORMAT = %d-%b-%Y %H:%M:%S.%3N
TIME_PREFIX = named\[.+\]\:\s
TRUNCATE = 99999
TZ = US/Eastern

I am trying to pull milliseconds from the log using the 2nd timestamp:

<30>Oct 30 11:31:39 172.1.1.1 named[18422]: 30-Oct-2024 11:31:39.731 client 1.1.1.1#1111: view 10: UDP: query: 27b9eb69be0574d621235140cd164f.test.com IN A response: NOERROR +EDV 27b9eb69be0236356140cd164f.test.com. 30 IN CNAME waw-test.net.; waw-mvp.test.net. 10 IN A 41.1.1.1; test.net. 10 IN A 1.1.1.1; test.net. 10 IN A 1.1.1.1; test.net. 10 IN A 1.1.1.1;

I have this loaded on the indexers and search heads, but it is still pulling from the first timestamp. A btool on the indexers shows this line that I have not configured:

DATETIME_CONFIG = /etc/datetime.xml

Is this what is screwing me up? Thank you!