All Posts

Hi team, I have been experiencing issues with log ingestion on a Windows server and I was hoping to get some advice. The files are generated on a mainframe and transmitted to a local share on a Windows server via TIBCO jobs. The files arrive in 9 windows throughout the day, 3 files at a time, varying in size from a few MB up to 3 GB. The solution has worked fine in lower environments, likely because of looser file/folder restrictions, but in PROD only one or two files per window get ingested. The logs indicate that Splunk can't open or read the files.

The running theory is that the process writing the files to disk is locking them so Splunk can't read them. I'm currently reviewing the permission sets for the TIBCO service account and the Local System account (the Splunk UF runs as this account) in the lower environments to try to spot any differences that could be causing the issue, based on the information in this post: https://community.splunk.com/t5/All-Apps-and-Add-ons/windows-file-locking/m-p/14126

In addition to that, I was exploring the possibility of using the "MonitorNoHandle" stanza, as it seems to fit the use case I am dealing with: monitoring single files that don't get updated frequently. But I haven't been able to determine from the documentation whether I can use wildcards in the filename. For reference, this is the documentation I'm referring to: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Data/Monitorfilesanddirectorieswithinputs.conf#MonitorNoHandle.2C_single_Windows_file

I'd appreciate any insights from the community, either regarding permissions or the use of the "MonitorNoHandle" input stanza. Thanks in advance.
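For reference, a minimal inputs.conf sketch of the two approaches; the paths, index, and sourcetype below are hypothetical placeholders. The linked documentation describes MonitorNoHandle as a single-file input, and I'm not aware of wildcard support there, so it would mean one stanza per literal path:

# Standard monitor input: wildcards are supported in the path
# (D:\mainframe\drops, the index, and the sourcetype are made-up examples)
[monitor://D:\mainframe\drops\*.log]
index = mainframe
sourcetype = mainframe:extract

# MonitorNoHandle: one literal file per stanza, Windows only. Per the docs
# it captures data written to the file after the input starts, which may
# matter for files that are copied onto the share as complete batches.
[MonitorNoHandle://D:\mainframe\drops\daily_extract.log]
index = mainframe
sourcetype = mainframe:extract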
Hi, I'm struggling to get our security tool alerts (e.g., Darktrace, Palo Alto) into ES as notable events, so that our security analysts can go in, look at all alerts, and have a single pane of glass. Could you please advise how you configured a correlation search to create notables when Darktrace alerts are logged into Splunk? Many thanks in advance!
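A minimal sketch of how such a correlation search could look, assuming (hypothetically) that the Darktrace alerts land in an index named darktrace with fields like src, dest, and model_name; adjust the index, sourcetype, and field names to the actual environment. Saved through Content Management in ES as a correlation search with the "Notable" adaptive response action enabled, each result becomes a notable event:

index=darktrace sourcetype="darktrace:alerts"
| rename model_name AS signature
| table _time, src, dest, signature, severity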
Looks like the issue was with "LINE_MERGE=TRUE" in the props.conf file. Thank you @PickleRick  and @yuanliu for chiming in.
Putting together a query that shows, on an individual alert level, the number of times the alert fired in a day and the average we were expecting. Below is the query as it stands now, but I am looking for a way to only show records from today/yesterday, instead of for the past 30 days. Any help would be appreciated.

index=_audit action="alert_fired" earliest=-30d latest=now
| eval date=strftime(_time, "%Y-%m-%d")
| stats count AS actual_triggered_alerts by ss_name date
| eventstats avg(actual_triggered_alerts) AS average_triggered_alerts by ss_name
| eval average_triggered_alerts = round(average_triggered_alerts,0)
| eval comparison = case(
    actual_triggered_alerts = average_triggered_alerts, "Average",
    actual_triggered_alerts > average_triggered_alerts, "Above Average",
    actual_triggered_alerts < average_triggered_alerts, "Below Average")
| search comparison!="Average"
| table date ss_name actual_triggered_alerts average_triggered_alerts
| rename date as "Date", ss_name as "Alert Name", actual_triggered_alerts as "Actual Triggered Alerts", average_triggered_alerts as "Average Triggered Alerts"
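One possible approach, sketched below: keep the 30-day window so eventstats can still compute the per-alert average, then filter the displayed rows to today and yesterday before the table. The only addition is the where clause anchored at midnight yesterday via relative_time; the rest of the original pipeline is unchanged:

index=_audit action="alert_fired" earliest=-30d latest=now
| eval date=strftime(_time, "%Y-%m-%d")
| stats count AS actual_triggered_alerts by ss_name date
| eventstats avg(actual_triggered_alerts) AS average_triggered_alerts by ss_name
| where date >= strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")

Since the dates are in YYYY-MM-DD form, the string comparison in the where clause behaves like a date comparison.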
Is the list of clients displayed on the Forwarder Management console synchronized with other Splunk servers? We have two Splunk Deployment Servers and a Cluster Manager that show the same list. Previously the DS only showed the clients it was actively connected with. Did this feature get added in 9.2 when the DS was updated?
Why don't you try macros with if/case statements?
You know? You're right, I hadn't looked at it that way. Still don't like it.   Thanks.
Essentially, it is a matter of interpretation of the chart - it could be argued that the "space" between 15:00 and 16:00 represents the events in this time (hence the space in the chart graphic). You could use a column chart to show the space "occupied" with a graphic.
Well, the chart takes up the space needed for data points from 12:00 to 16:00, but since there isn't any data in the 16:00 bin the graphic stops at 15:00 and leaves a void where 15:00 to 16:00 would normally be (if you cut a chunk of time out of a larger graph, that is). That space is 1/4th of the time chart panel with a four-hour window. It's a third with a three-hour window. Is there any way to eliminate that void and stretch the chart across to fill the space?
This solution does not work; I am getting an empty result. I think there is an issue where the myInput variable is not passed into the append subsearches. Another issue with this solution is that both queries will run, but we know beforehand which query to run, so I am looking for an optimized solution where only one query runs based on the filter.
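One way to run only the selected query, sketched below with placeholder search strings: a subsearch that returns a field literally named "search" is substituted into the outer query as a raw search string, so only one of the two searches ever executes. index=* and the quoted strings are stand-ins for the real queries; in a dashboard, the hard-coded myInput would come from the input token (e.g. | eval myInput="$myInput$"):

index=* [ | makeresults
    | eval myInput="*"
    | eval search=if(myInput="*",
          "\"my search related to query 1\"",
          "\"my search related to query 2\"")
    | fields search ]
| rex field=_raw "Job id : (?<job_id>[^,]+)"
| table job_id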
The time range 12:00 to 16:00 is for timestamps greater than or equal to 12:00 and less than 16:00, i.e. you don't get times beginning 16:00, so you are getting what you asked for. When timestamps are binned by the timechart command, all timestamps are taken back to the beginning of the time slot they are binned into. What would you expect to be in the 16:00 data point, given that your earliest and latest values have not included any events beyond 16:00?
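A small self-contained illustration of the binning (the hard-coded timestamp is just for the demo): the single synthetic event below has a timestamp of 15:59:00, and timechart places it in the 15:00 slot, which is why no 16:00 data point appears when latest is 16:00:

| makeresults
| eval _time=strptime("2024-10-30 15:59:00", "%Y-%m-%d %H:%M:%S")
| timechart span=1h count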
I have the following props which works fine in the "Add Data" GUI with a test file of logs:

EVENT_BREAKER = ([\r\n]+)\<.+\>\w{2,4}\s\d{1,2}\s
EVENT_BREAKER_ENABLE = true
LINE_BREAKER = ([\r\n]+)\<.+\>\w{2,4}\s\d{1,2}\s
MAX_TIMESTAMP_LOOKAHEAD = 25
SHOULD_LINEMERGE = false
TIME_FORMAT = %d-%b-%Y %H:%M:%S.%3N
TIME_PREFIX = named\[.+\]\:\s
TRUNCATE = 99999
TZ = US/Eastern

I am trying to pull milliseconds from the log using the 2nd timestamp:

<30>Oct 30 11:31:39 172.1.1.1 named[18422]: 30-Oct-2024 11:31:39.731 client 1.1.1.1#1111: view 10: UDP: query: 27b9eb69be0574d621235140cd164f.test.com IN A response: NOERROR +EDV 27b9eb69be0236356140cd164f.test.com. 30 IN CNAME waw-test.net.; waw-mvp.test.net. 10 IN A 41.1.1.1; test.net. 10 IN A 1.1.1.1; test.net. 10 IN A 1.1.1.1; test.net. 10 IN A 1.1.1.1;

I have this loaded on the indexers and search heads, but it is still pulling from the first timestamp. A btool on the indexers shows this line that I have not configured:

DATETIME_CONFIG = /etc/datetime.xml

Is this what is screwing me up? Thank you!
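DATETIME_CONFIG = /etc/datetime.xml is the stock default (it points at Splunk's built-in timestamp rules), so seeing it in btool is normal rather than a problem by itself. A hedged way to check whether the stanza above is actually winning on an indexer, assuming a placeholder sourcetype name of isc:bind:

# On an indexer, list the effective props for the sourcetype and which
# file each setting comes from; replace isc:bind with the real sourcetype
$SPLUNK_HOME/bin/splunk btool props list isc:bind --debug

If TIME_PREFIX and TIME_FORMAT don't show up attributed to your app in that output, the events are probably arriving under a different sourcetype than the stanza name, or the app isn't deployed where the parsing happens.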
| makeresults
| eval myInput="*"
| append [
    | search "my search related to query 1"
    | rex field=_raw "Job id : (?<job_id>[^,]+)"
    | eval query_type=if(myInput="*", "query1", null())
    | where query_type="query1"
    | table job_id, query_type, myInput ]
| append [
    | search "my search related to query 2"
    | rex field=_raw "Job id : (?<job_id>[^,]+)"
    | eval query_type=if(myInput!="*", "query2", null())
    | where query_type="query2"
    | table job_id, query_type, myInput ]
I have not had the issue where I can't see data, @Strangertinz. Pravin
Afternoon, Splunkers! Timechart is really frothing my coffee today. When putting in the parameters for a timechart, it always cuts off the latest time value. For example, if I give it a time window of four hours with a span of 1h, I get a total of four data points:

12:00:00
13:00:00
14:00:00
15:00:00

I didn't ask for four data points, I asked for the data points from 12:00 to 16:00. And in this particular example, no, 16:00 isn't a time that hasn't arrived yet or only has partial data; it does this with any time range I pick, at any span setting. Now, I can work around this by programming the dashboard to add 1 second to the <latest> time for the time range. Not that huge of a deal. However, I'm left with a large void on the right-hand side of the time range. Is there any way I can fix this, either by forcing the timechart to show me the whole range or by hiding the empty range?
Hi @_pravin, you can correct the issue by going to the latest version of DB Connect, assuming you are running Splunk 9.2.x. I am still dealing with the issue of the latest version not seeing data sent from my DB Connect. Has this ever happened to you, and how did you resolve it? I can't find any error log that points to the main issue.
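A hedged starting point for hunting down an error log, assuming a default installation where the DB Connect app logs under $SPLUNK_HOME/var/log/splunk and those files are indexed into _internal; the source pattern below is an assumption to adjust:

index=_internal source=*splunk_app_db_connect* ERROR
| stats count by source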
Thank you, but I wanted to learn where the random text "THE_TERM" comes from and how it gets into the query.
I have two queries in Splunk, query 1 and query 2, and an input. Based on the input, I need to execute either query 1 or query 2. I am trying something like the query below, but it is not working for me.

| makeresults
| eval myInput="*"
| append [ search "my search related to query 1"
    | rex field=_raw "Job id : (?<job_id>[^,]+)"
    | where myInput="*"
    | eval query_type="query1"
    | table job_id, query_type, myInput ]
| append [ search "my search related to query 2"
    | rex field=_raw "Job id : (?<job_id>[^,]+)"
    | where myInput!="*"
    | eval query_type="query2"
    | table job_id, query_type, myInput ]
Splunk versions 9.0.8/9.1.3/9.2.x and above have added the capability to process key-value pairs that are added at index time to all events flowing through the input. It is now possible to "tag" all data coming into a particular HEC token. HEC will support all present and future inputs.conf.spec configs (_meta/TCP_ROUTING/SYSLOG_ROUTING/queue etc.).
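A minimal inputs.conf sketch of what that tagging could look like; the token name and field values are hypothetical, and _meta uses the standard space-separated key::value syntax:

# inputs.conf on the instance terminating HEC traffic
[http://mainframe_hec]
disabled = 0
token = <your-token-guid>
index = main
# every event arriving on this token gets these index-time fields
_meta = environment::prod ingest_path::hec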
Follow the commands below on Linux, compare with the existing servers, and apply the desired timezone:

# CHANGE THE SERVER TIMEZONE via Linux

# identify the server's current timezone
timedatectl

# list the available timezones
timedatectl list-timezones

# set the desired timezone
sudo timedatectl set-timezone America/Sao_Paulo