All Posts



Hi   We are trying to integrate data that is in Splunk into ELK using a heavy forwarder. Can anyone suggest how inputs.conf can be configured so that the heavy forwarder listens for the data from the search head, and how outputs.conf can then be used to send the data on to ELK via stash?   Thanks
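For forwarding events from a Splunk tier to a non-Splunk receiver such as ELK, one commonly used pattern is a splunktcp input on the heavy forwarder plus a tcpout group with cooked data disabled. The sketch below assumes this approach; the port, host, and group names are placeholders, not values from the original post:

```
# inputs.conf on the heavy forwarder — listen for forwarded Splunk traffic
[splunktcp://9997]
disabled = 0

# outputs.conf on the heavy forwarder — send raw (uncooked) TCP onward;
# "logstash.example.com:5044" and the group name are placeholder values
[tcpout]
defaultGroup = elk_group

[tcpout:elk_group]
server = logstash.example.com:5044
sendCookedData = false
```

sendCookedData = false is what makes the stream readable by a third-party receiver; the ELK side would still need a matching TCP listener and its own parsing.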
Regarding Splunk Enterprise together with the Splunk Operator on Kubernetes: what would be the best way to disable the health probes so I can shut down Splunk and leave it shut down? Having the health probes is very nice, but it becomes a pain when doing maintenance on individual Splunk pods. Any ideas?
Hi @Splunkerninja, You can use the query below: index=_internal source=*license_usage.log* type="Usage" | timechart span=1d eval(round(sum(b)/1024/1024/1024,3)) as GB by idx  
@scelikok Yes, I tried it with - |where AA IN ('00','10') AND BB IN ('00','10') but it was not giving any output; the second one did work though.   Thanks
The main idea is to have a graceful shutdown/start and run the necessary commands for the clusters. I had a look at your high-level steps and they look OK; I did something similar a while ago for stopping a Splunk cluster environment and bringing it back up. Shutting down the data forwarding tier first is a good idea, otherwise data will be lost with nowhere to go.

Shutdown:
1. Place the Cluster Manager (CM) in maintenance mode.
2. Shut down the Deployment Server / HFs, if in use.
3. Shut down the SHC: take note of the SHC captain, stop the SHC members and then the captain, and finally make sure they are all down.
4. Shut down the Deployer.
5. With the CM still in maintenance mode, shut down the indexers one at a time using the normal commands (/opt/splunk/bin/splunk stop) and make sure they are down.
6. Shut down the CM.

On the reverse:
1. Make sure the CM is up and still in maintenance mode.
2. Bring all the indexers up; when they are all up, disable maintenance mode. Check status using the MC — the replication and search factors should show green status, so you may have to wait a bit.
3. Bring the Deployer back up.
4. Bring the SHs up one by one; ensure the captain is up first, then the other SHC members, and check that the others can communicate with it, using the SHC cluster commands to check status.
5. Bring back the Deployment Server / HFs.
6. Bring back the data forwarding tier.
7. Use the MC to check overall health.

I would document all the steps and commands clearly, so you have a process to follow with checkpoints, rather than working in an ad-hoc manner given the many moving parts.
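For reference, the per-step CLI commands can be sketched roughly as follows. These are standard Splunk CLI invocations run on the respective hosts (the /opt/splunk path assumes a default install); verify the exact flags against the docs for your version:

```shell
# On the Cluster Manager: enter maintenance mode before touching indexers
/opt/splunk/bin/splunk enable maintenance-mode --answer-yes

# On an SHC member: note the current captain before stopping anything
/opt/splunk/bin/splunk show shcluster-status

# Stop an instance (SHC members, then captain; indexers one at a time)
/opt/splunk/bin/splunk stop

# On startup, once all indexers have rejoined the CM:
/opt/splunk/bin/splunk disable maintenance-mode
```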
Hi @man03359, You can use the following syntax: index="idx-some-index" sourcetype="dbx" source="some.*.source" AA IN (00,10) BB IN (00,10)   or index="idx-some-index" sourcetype="dbx" source="some.*.source" (AA=00 OR AA=10) (BB=00 OR BB=10)  
Never mind! I was able to get the desired output by using - | where (AA ="00" OR AA="10") OR (BB="00" OR BB="10")
Try | tstats count where index=sm
Hi SMEs,   I am having a problem where logs coming from one of the syslog servers are getting clubbed into one single raw event and not getting split. Rather than being split into 3 different events, the sample below comes in as one single event. Kindly suggest any possible workaround.   Apr 14 17:30:50 172.10.10.10 %ASA-2-106006: Deny inbound UDP from 10.20.30.40/51785 to 172.10.10.10/162 on interface AI-VO-PVT Apr 14 17:30:50 10.20.30.40 12812500: RP/0/RP0/CPU0:Apr 14 17:30:50.489 IST: ifmgr[301]: %PK-5-UPDOWN : Line protocol on Interface GigabitEthernet0/0/0/18, changed state to Down Apr 14 17:30:50 10.225.124.136 TMNX: 258900 Base LOGGER-MINOR-tmnxLogFileDeleted-2009 [acct-log-id 18 file-id 22]: Log file cf3:\acttt\actof1822-20240414-075.xml.gz on compact flash cf3 has been deleted Apr 14 17:30:50 10.20.30.40 12812502: RP/0/RP0/CPU0:Apr 14 17:30:50.493 IST: fia_driver[334]: %PLATFORM-2_FAULT : Interface GigabitEthernet0/0/0/18, Detected Local Fault
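A common fix for merged syslog events like the sample above is to force event breaking on the syslog timestamp in props.conf on the HF/indexer that first parses the data. This is a sketch: the stanza name is a placeholder for the actual sourcetype, and the regex assumes every event starts with a "Mon DD HH:MM:SS" header as in the sample:

```
# props.conf — break events at each leading syslog timestamp
# ([your_syslog_sourcetype] is a placeholder for the real sourcetype name)
[your_syslog_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 15
```

SHOULD_LINEMERGE = false with an explicit LINE_BREAKER is generally preferred over line merging, since it breaks events in a single pass during parsing.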
>Also, why didn't Splunk containers crash when this kind of failure happens?
It's a race condition that happens if all of the following are true:
1. The instance is using a persistent queue (PQ).
2. The instance receives Splunk metrics/introspection etc. events from the previous layer via the splunktcp input port and clones these events (for example, the `splunk_internal_metrics` app).
3. Queues were blocked on the instance, which caused metrics/introspection events to be written to the PQ disk queue, read back from the PQ disk queue after a Splunk restart, and then cloned.
If an instance avoids at least one of the above conditions, it will avoid the crash. Any event that hits the PQ disk, is read from the PQ disk after a Splunk restart, and is then cloned will cause a crash.
The good news is that it's a high-priority issue for us, fixed for the upcoming major release 9.3.0 (the .conf release) and backported to the upcoming 9.0.x/9.1.x/9.2.x patches (9.2.2/9.1.5/9.0.10).
Hello,   It is working now. This did the trick: [replicationDenylist] ms_graph = ...TA-microsoft-graph-security-add-on-for-splunk[/\\]bin[/\\]... Thanks
I have two fields (let's say) AA and BB, and I am trying to filter results where AA and BB = 00 OR 10 using something like this - index="idx-some-index" sourcetype="dbx" source="some.*.source" | where (AA AND BB)== (00 OR 10) But I am getting the error: Error in 'where' command: Type checking failed. 'AND' only takes boolean arguments. I have also tried - index="idx-some-index" sourcetype="dbx" source="some.*.source" | where AA =(00 OR 10) AND (BB=(OO OR 10)) But I get the same kind of error: Type checking failed. 'OR' only takes boolean arguments.   Please help!
@gcusello @bowesmana  We are on Splunk Cloud and we use workload-based management for licensing, i.e. SVC. The query you are giving is not returning the aggregate daily ingest per index for the last 7 days.
It appears that the GuardDuty logs are collected via CloudWatch, which this TA supports (https://splunkbase.splunk.com/app/1876), so this is most likely what you need. I think the old TAs used to be separate and have now been combined into this one TA. See the different sourcetypes - for you it's aws:cloudwatchlogs:guardduty https://docs.splunk.com/Documentation/AddOns/released/AWS/DataTypes General info on this TA https://docs.splunk.com/Documentation/AddOns/released/AWS/Description
Hi Team,   We want to send AWS GuardDuty logs to Splunk Cloud. What is the procedure to achieve this? Earlier we had the option of the Amazon GuardDuty Add-on for Splunk (https://splunkbase.splunk.com/app/3790), but it is currently archived. Do we have any add-on or app to collect the events and onboard the logs to Splunk? Kindly help check and update on the same.
We have integrated the AWS GuardDuty logs into Splunk through an S3 bucket.  Recently, we noticed this error in our health check: The file extension fileloaction.jsonl is not in a delimited file format.   Please suggest how I can resolve this.
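One possible cause (an assumption, since the full health-check message isn't shown) is that the input applies a delimited-file INDEXED_EXTRACTIONS setting (csv/tsv) to the .jsonl files. A hedged props.conf sketch that tells Splunk to treat those files as JSON instead; the source pattern is illustrative and should match the actual file path:

```
# props.conf — treat .jsonl files as JSON rather than a delimited format
# (the source stanza pattern below is a placeholder for your real path)
[source::*.jsonl]
INDEXED_EXTRACTIONS = json
```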
Thank you all so far
I have checked the following:

splunk list inputstatus
/data/syslog/opswat/metadefender/10.x.y.z/2024-04-23-engine.log
file position = 176673
file size = 176673
parent = /data/syslog/opswat/metadefender/10.x.y.z/*-engine.log
percent = 100.00
type = finished reading

Checking the file itself:
more /data/syslog/opswat/metadefender/10.x.y.z/2024-04-23-engine.log | wc
584 11089 176673

And in Splunk there are the following events for 2024-04-23:
index=mb_secgw_cdr_tst_logs | stats count
460

I have selected an event from the syslog data and searched all over the index with no success, so the event is probably not indexed. Syslog-ng is configured to write the log per host per day into separate files, and events are missing at random throughout the day. I will try to get more information from the HFs.
Hello, I have multiple dashboards where I use the loadjob command, since it is very useful for recycling big search results that need to be manipulated in other searches. I know that unscheduled jobs have a default TTL of 10 minutes, and I don't want to change the default TTL globally in the system. Why I'm asking: if a user accesses a dashboard that uses loadjob at, say, 10:00 AM and then comes back to it at 10:15 AM, I don't want them to see a message about the job no longer existing (like 'Cannot load artifact' or similar). To prevent this, I want to write a JavaScript that recovers the job ID the user's search generated and, every 9 minutes, extends its TTL by 10 minutes. The JavaScript will loop: wait 9 minutes, extend the TTL, and repeat until the user closes the dashboard (only at that point will I let the job expire after 10 minutes, so I don't overload the disk). So the question is: given a job ID X, how can I extend only its TTL by 10 minutes? Is there an SPL command? And how can I launch it from JavaScript? Thank you very much to anyone who can help!
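For reference, Splunk's REST API exposes a per-job control endpoint (POST to /services/search/jobs/{sid}/control with action=setttl and a ttl value in seconds) that extends a job's TTL, so no SPL command is needed. A minimal browser/Node sketch of the keep-alive loop described above; the function names, base URL, and session-key handling are illustrative assumptions, not Splunk APIs:

```javascript
const KEEPALIVE_MS = 9 * 60 * 1000; // re-extend the TTL every 9 minutes

// Build the job-control endpoint URL for a given search ID (sid).
function buildSetTtlUrl(baseUrl, sid) {
  return `${baseUrl}/services/search/jobs/${encodeURIComponent(sid)}/control`;
}

// POST action=setttl to push the job's TTL out to ttlSeconds from now.
// fetch() is available in browsers and in Node 18+.
function extendTtl(baseUrl, sid, sessionKey, ttlSeconds) {
  return fetch(buildSetTtlUrl(baseUrl, sid), {
    method: "POST",
    headers: {
      Authorization: `Splunk ${sessionKey}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: `action=setttl&ttl=${ttlSeconds}`,
  });
}

// Returns the interval id so the dashboard can clearInterval() on unload,
// after which the job is left to expire on its normal 10-minute TTL.
function startKeepAlive(baseUrl, sid, sessionKey) {
  return setInterval(() => extendTtl(baseUrl, sid, sessionKey, 600), KEEPALIVE_MS);
}
```

In a Simple XML dashboard extension you would obtain the sid from the search manager (e.g. its job object) and call startKeepAlive with it; remember to clear the interval in an unload handler so jobs don't live forever.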