All Topics

Regarding Splunk Enterprise together with the Splunk Operator on Kubernetes: what would be the best way to disable the health probes so I can shut down Splunk and leave it shut down? Having the health probes is very nice, but it becomes a pain when doing maintenance on individual Splunk pods. Any ideas?
Hi SMEs, I am having a problem where logs coming from one of the syslog servers are getting clubbed into one single raw event and not getting split. Rather than being split into 3 different events, they come in under one single event. Sharing a sample below. Kindly suggest any possible workaround.

Apr 14 17:30:50 172.10.10.10 %ASA-2-106006: Deny inbound UDP from 10.20.30.40/51785 to 172.10.10.10/162 on interface AI-VO-PVT
Apr 14 17:30:50 10.20.30.40 12812500: RP/0/RP0/CPU0:Apr 14 17:30:50.489 IST: ifmgr[301]: %PK-5-UPDOWN : Line protocol on Interface GigabitEthernet0/0/0/18, changed state to Down
Apr 14 17:30:50 10.225.124.136 TMNX: 258900 Base LOGGER-MINOR-tmnxLogFileDeleted-2009 [acct-log-id 18 file-id 22]: Log file cf3:\acttt\actof1822-20240414-075.xml.gz on compact flash cf3 has been deleted
Apr 14 17:30:50 10.20.30.40 12812502: RP/0/RP0/CPU0:Apr 14 17:30:50.493 IST: fia_driver[334]: %PLATFORM-2_FAULT : Interface GigabitEthernet0/0/0/18, Detected Local Fault
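A minimal props.conf sketch for the parsing tier (indexer or heavy forwarder), assuming the raw stream separates the individual messages with newlines; the sourcetype name my_syslog is a placeholder for whatever sourcetype these events arrive under. The idea is to disable line merging and break before each syslog timestamp:

[my_syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S

If the sender is concatenating the messages into a single syslog frame before they reach Splunk, the splitting has to happen on the syslog relay instead.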
I have two fields (let's say) AA and BB, and I am trying to filter results where AA and BB = 00 OR 10, using something like this:

index="idx-some-index" sourcetype="dbx" source="some.*.source" | where (AA AND BB)== (00 OR 10)

But I am getting the error: Error in 'where' command: Type checking failed. 'AND' only takes boolean arguments. I have also tried:

index="idx-some-index" sourcetype="dbx" source="some.*.source" | where AA =(00 OR 10) AND (BB=(OO OR 10))

But I am getting the same error: Type checking failed. 'OR' only takes boolean arguments. Please help!
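A minimal sketch of the usual fix: AND and OR in where only combine complete comparisons, so each field needs its own comparison (drop the quotes if AA and BB are numeric rather than string values):

index="idx-some-index" sourcetype="dbx" source="some.*.source"
| where (AA="00" OR AA="10") AND (BB="00" OR BB="10")

On recent versions the IN operator is an equivalent shorthand, e.g. | where AA IN ("00","10") AND BB IN ("00","10").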
Hi Team, we want to send AWS GuardDuty logs to Splunk Cloud. What is the procedure to achieve this? Earlier there was an option, the Amazon GuardDuty Add-on for Splunk (https://splunkbase.splunk.com/app/3790), but it is currently archived. Do we have any add-on or app to collect the events and onboard the logs to Splunk? Kindly help to check and update on the same.
We have integrated the AWS GuardDuty logs into Splunk through an S3 bucket. Recently, we have noticed this error in our health check: The file extension fileloaction.jsonl is not in a delimited file format. Can you suggest how I can resolve this?
Hello, I have multiple dashboards where I use the loadjob command, since it is very useful for recycling big search results that need to be manipulated in other searches. I know that unscheduled jobs have a default TTL of 10 minutes, and I don't want to change their default TTL globally in the system. Why I'm asking this question: if a user accesses a dashboard with the loadjob command in the code at, for example, 10AM and then comes back to the dashboard at 10:15AM, I don't want them to see a message about the job no longer existing (like 'Cannot load artifact' or something like that). To prevent this, I want to create a JavaScript that recovers the job ID that the user's search generated and every 9 minutes increments its TTL by 10 minutes. So the JavaScript will have a loop that waits 9 minutes and changes the job's TTL over and over again until the user closes the dashboard (only at that moment will I let the job expire after 10 minutes, so I don't overload the disk). So, the question is: if I have a job ID X, how can I increment only its TTL by adding 10 minutes to it? Is there an SPL command? And how can I launch it from JavaScript? Thank you very much to anyone who can help!
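There is no SPL command for this, but the search job REST endpoint can do it. A hedged sketch (the SID and credentials are placeholders) that sets a job's TTL to 600 seconds; the same POST can be issued from dashboard JavaScript with an authenticated request against the splunkd management port, or through the SplunkJS SDK's job object:

curl -k -u admin:changeme \
  https://localhost:8089/services/search/jobs/<sid>/control \
  -d action=setttl -d ttl=600

Note that setttl sets the job's remaining TTL to the given value rather than adding to it, so calling it every 9 minutes with ttl=600 gives the rolling extension described above.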
Hi folks, I tried configuring a send HTTP request action, providing the required fields, in Splunk Cloud, but the request is not sent to the destination. Can anyone help me with this? Regards, Sham
Hi all, we collect some JSON data from a logfile with a Universal Forwarder. Most of the time the events are indexed correctly with already extracted fields, but for a few events the fields are not automatically extracted. If I reindex the same events, the indexed extraction is also fine. I did not find any entries in splunkd.log indicating that it is not working. The following props.conf is on the Universal Forwarder and the Heavy Forwarder (maybe someone could explain which parameter is needed on the UF and which on the HF):

[svbz_swapp_task_activity_log]
CHARSET=UTF-8
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
INDEXED_EXTRACTIONS=json
KV_MODE=none
category=Custom
disabled=false
pulldown_type=true
TIMESTAMP_FIELDS=date_millis
TIME_FORMAT=%s%3N

The following props.conf is on the search head:

[svbz_swapp_task_activity_log]
KV_MODE=none

The first time it was indexed automatically it looks like the first screenshot; when I reindex the same event again into another index it looks fine (second screenshot). In the last 7 days it worked correctly for about 32000 events, but for 168 events the automatic field extraction was not working. Here is also an example event:

{"task_id": 100562, "date_millis": 1713475816310, "year": 2024, "month": 4, "day": 18, "hour": 23, "minute": 30, "second": 16, "action": "start", "step_name": "XXX", "status": "started", "username": "system", "organization": "XXX", "workflow_id": 14909, "workflow_scheme_name": "XXX", "workflow_status": "started", "workflow_date_started": 1713332220965, "workflow_date_finished": null, "escalation_level": 0, "entry_attribute_1": 1711753200000, "entry_attribute_2": "manual_upload", "entry_attribute_3": 226027, "entry_attribute_4": null, "entry_attribute_5": null}

Does someone have an idea why it is sometimes working and sometimes not? If I changed KV_MODE on the search head now, the fields would be shown correctly for these 168 events, but for all the others the fields would be extracted twice. Using spath with the same names would extract them only once. What is the best workaround for already indexed events to get proper search results? Thanks and kind regards, Kathrin
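One hedged search-time workaround for the events that are already indexed without fields: extract with spath into temporary names and coalesce, so events that do have the indexed fields keep them and the 168 broken ones fall back to the spath values. A minimal sketch for one field (the index name is a placeholder; repeat the pattern for the other JSON fields you need):

index=your_index sourcetype=svbz_swapp_task_activity_log
| spath input=_raw path=task_id output=task_id_spath
| eval task_id=coalesce(task_id, task_id_spath)
| fields - task_id_spath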
Hello, we are encountering a problem with the parsing in the Fortigate add-on. It does not recognize the devid of our equipment: this FortiGate has a serial number starting with FD, which is not matched by the regex

^.+?devid=\"?F(?:G|W|\dK).+?(?:\s |\,|\,\s)type=\"?(traffic|utm|event|anomaly)

from the stanza [force_sourcetype_fortigate]. We updated it on our side, but is this behavior normal? Thanks in advance, best regards.
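For reference, a hedged sketch of the kind of local override meant here: only the REGEX of the add-on's stanza is overridden in a local transforms.conf, keeping its existing FORMAT/DEST_KEY lines untouched. Adding D to the alternation so FD* serials match is a local workaround, not an official fix:

[force_sourcetype_fortigate]
REGEX = ^.+?devid=\"?F(?:G|W|D|\dK).+?(?:\s |\,|\,\s)type=\"?(traffic|utm|event|anomaly)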
Hello Team, I would like to clarify whether there is a possibility of ingesting application Prometheus metrics into Splunk Enterprise through Universal or Heavy Forwarders. Currently we are able to ingest Prometheus metrics into Splunk Enterprise through the Splunk OTel Collector and Splunk HEC. Is there a similar solution using forwarders? Kindly suggest. Additionally, can we also confirm whether the Splunk OTel Collector + Fluentd agent is available only as open-source agents?
Hi, I came across many queries to calculate daily ingest per index for the last 7 days, but I am not getting the expected results. Can you please guide me with a query to calculate the daily ingest per index in GB for the last 7 days?
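A commonly used sketch based on the license usage log, where idx is the index name and b is bytes, so this reports licensed (ingested) volume per index per day in GB; it is normally run where the license manager's _internal data is searchable:

index=_internal source=*license_usage.log* type=Usage earliest=-7d@d latest=@d
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as GB by idx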
I want to show lookup file content horizontally, e.g. rather than the values appearing vertically (a, b, c each on its own row), I want them laid out horizontally in a single row: a b c.
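If this is about the search results feeding the panel rather than the dashboard layout itself, a minimal sketch using transpose (the lookup name is a placeholder) turns rows into columns:

| inputlookup my_lookup.csv
| transpose 0

The 0 removes the default 5-row limit; header_field=<some_field> can be added to use one of the lookup's columns as the new header names.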
index=abc host IN ()
| stats max(response_time) as "Maximum Response Time" by URL
| sort - "Maximum Response Time"

I need to add the respective time for the maximum response time along with the stats. Could you please help?
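One hedged way to keep the timestamp of the event that produced the maximum: mark the per-URL maximum with eventstats, keep only those events, and take their time in the stats (if several events tie, latest(_time) keeps the most recent one):

index=abc host IN ()
| eventstats max(response_time) as max_rt by URL
| where response_time=max_rt
| stats max(response_time) as "Maximum Response Time" latest(_time) as max_time by URL
| eval max_time=strftime(max_time, "%Y-%m-%d %H:%M:%S")
| sort - "Maximum Response Time"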
Hi, currently we are running Splunk Enterprise and the Universal Forwarder on version 9.0, but now we need to upgrade to the latest version. Is it possible for the Splunk Enterprise and UF versions to be different, for example Splunk Enterprise on 9.1 and the UF on 9.0, or should both be on the same version? From 9.0, should we go to 9.1 or 9.2? Thanks in advance for your kind advice and guidance. -AK
Good morning, I am currently instructing the Cluster Admin course, and a student has asked a question which, to my great surprise, doesn't seem to be covered anywhere. They have an indexer cluster and SHC on a single site, and they want to shut down everything for a planned power outage in their data centre. What is the correct sequence and commands for doing this? My own guesses are:

Shut down everything that is sending data to Splunk first.
Place the indexer cluster in maintenance mode.
Shut down the deployment server if in use.
Shut down the SHC deployer (splunk stop).
Shut down the SHC members (splunk stop?).
Shut down the indexer members (? not sure which variant of the commands to use here).
Shut down the cluster master last.

Restart is the reverse order. Correct or not? Thank you, Charles
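A hedged sketch of the commands that would line up with those guesses (single-site cluster assumed; not official guidance):

# on the cluster manager: pause bucket fix-ups before stopping peers
splunk enable maintenance-mode

# on the deployment server, the SHC deployer and each SHC member
splunk stop

# on each indexer peer: graceful peer shutdown
splunk offline

# on the cluster manager, last
splunk stop

# after powering back on in the reverse order, on the cluster manager:
splunk disable maintenance-mode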
Hi, I am facing an executable permission issue with a few scripts in a Splunk app and seeing these errors on various search heads. What is the best way to fix it? Can someone help me with the script or a fix if you have ever come across this? Thanks in advance.
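A minimal sketch of the usual fix, assuming the scripts live in the app's bin directory (the app name is a placeholder) and splunkd runs as the account that owns them:

# make the app's scripts executable and check ownership
chmod +x $SPLUNK_HOME/etc/apps/<your_app>/bin/*.sh
chmod +x $SPLUNK_HOME/etc/apps/<your_app>/bin/*.py
ls -l $SPLUNK_HOME/etc/apps/<your_app>/bin

If the search heads are in a cluster, the same change may also need to be made in the app bundle on the deployer so the next push does not revert it.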
Is there a good step-by-step, practical, hands-on, how-to guide, starting at the first step and ending at successful completion, for doing this: ingest AWS CloudWatch logs into Splunk Enterprise running on an EC2 instance in that AWS environment? I've read a lot of documents, tried different things, followed a couple of videos, and I'm able to see CloudWatch configuration entries in my main index, but so far I have not gotten any CloudWatch logs. I am not interested in deep architectural understanding. I just want to start from the very beginning at the true step one, and end at the last step with logs showing up in my main index. Also, the community "ask a question" page requires an "associated Apps" entry and I picked one from the available list, but I don't care which app works, I just want to use the one that works. Thank you very much in advance.
We have a table where I see no data for a few columns. I tried fillnull value=0 but it is not working. This only happens when there is no count for the complete column. For example, for invalidcount we have data for Login but no data for the other applications, so zero values are automatically filled in; but for rejectedcount, trmpcount and topiccount there is no data for any application, and the 0 values are not getting filled in.

Application    incomingcount  rejectedcount  invalidcount  topcount  trmpcount  topiccount
Login          1                             2             5
Success        8                             0             2
Error          0                             0             10
logout         2                             0             4
Debug          0                             0             22
error-state    0                             0             45
normal-state   0                             0             24
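A minimal sketch of the usual fix: when a field is missing from every result, fillnull with no field list has nothing to fill, so name the columns explicitly (field names taken from the table above; the leading search is whatever currently builds the table):

<your existing search>
| fillnull value=0 incomingcount rejectedcount invalidcount topcount trmpcount topiccount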
Hello, I have the following data. I want to return tabled data if the events happened within 100ms and they match by the same hashCode and the same thirdPartyId. So essentially the search has to be sorted by each combination of thirdPartyId and hashCode and then compare events line by line to see if the previous line and the current one happened within 100ms. What should the query look like?

| makeresults format=csv data="startTS,thirdPartyId,hashCode,accountNumber
2024-04-16 21:53:02.455-04:00,AAAAAAAA,00000001,11111111
2024-04-16 21:53:02.550-04:00,AAAAAAAA,00000001,11112222
2024-04-16 21:53:02.650-04:00,BBBBBBBB,00001230,22222222
2024-04-16 21:53:02.650-04:00,CCCCCCCC,00000002,12121212
2024-04-16 21:53:02.730-04:00,DDDDDDDD,00000005,33333333
2024-04-16 21:53:02.830-04:00,DDDDDDDD,00000005,33334444
2024-04-16 21:53:02.670-04:00,BBBBBBBB,00000002,12121212
2024-04-16 21:53:02.700-04:00,CCCCCCCC,00000002,21212121"
| sort by startTS, thirdPartyId
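A hedged sketch using streamstats, appended directly after the makeresults command above (in place of the sort): parse startTS into epoch seconds, sort within each thirdPartyId/hashCode pair, and compare each row with the previous row of the same pair. Field names are taken from the sample; if strptime on your version does not honour %3N or the %:z offset, the time format string will need adjusting:

| eval ts=strptime(startTS, "%Y-%m-%d %H:%M:%S.%3N%:z")
| sort 0 thirdPartyId hashCode ts
| streamstats current=f window=1 last(ts) as prev_ts by thirdPartyId hashCode
| eval gap_ms=round((ts-prev_ts)*1000)
| where gap_ms <= 100
| table startTS thirdPartyId hashCode accountNumber gap_ms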
This is an odd one happening on each of our indexers. The same behavior happens quite frequently, where we will get exactly 11 of these Remote token requests from splunk-system-user, and exactly 1 of them will fail. Here is how it looks in the audit logs.

04-22-2024 21:30:31.964 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:31.964, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:31.986 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:31.986, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:32.384 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:32.384, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:32.395 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:32.395, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:40.687 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:40.687, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:40.694 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:40.694, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:46.803 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:46.803, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:46.815 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:46.815, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:47.526 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:47.526, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:47.542 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:47.542, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:55.317 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:55.317, user=splunk-system-user, action=Remote token requested, info=failed]

My problem is I can't do much more with this information. I have no notion of where these requests are coming from, since no other information is included here. Is there anything else I can investigate? The number 11 doesn't seem to line up with anything I can think of either: there are 3 search heads, 3 indexers and 1 cluster manager in this particular deployment. Not sure where the 11 requests come from.
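One hedged avenue: splunkd.log on the same indexer usually carries more context around token handling than _audit does. A sketch that pulls splunkd events from a narrow window around one of the failures above (the host name is a placeholder, and the keyword filter is just a starting point):

index=_internal host=<your_indexer> source=*splunkd.log* ("token" OR log_level=WARN OR log_level=ERROR)
  earliest="04/22/2024:21:30:50" latest="04/22/2024:21:31:05"
| table _time host component log_level _raw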