All Posts


I've tried the below with the fieldformat before and after the chart command, with the same results: the duration_U field still shows as a Unix timestamp, so the chart is technically correct, but the y-axis information is not human readable. It just shows values ranging from 70,000 to 90,000.

index= source= | strcat date "000000" BDATE | eval duration_U=strptime(end_time,"%Y-%m-%d %H:%M:%S.%N") - strptime(BDATE,"%Y%m%d%H%M%S") | fieldformat duration_U=tostring(duration_U,"duration") | chart latest(duration_U) over system by date
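Since the y-axis of a chart needs numeric values (a fieldformat string like "1+05:23:10" can't be plotted), one pragmatic sketch is to chart the duration in minutes instead of raw seconds, so the axis stays numeric but readable. This is only a variant to try, not verified against your data:

```spl
index= source=
| strcat date "000000" BDATE
| eval duration_U = strptime(end_time,"%Y-%m-%d %H:%M:%S.%N") - strptime(BDATE,"%Y%m%d%H%M%S")
| eval duration_min = round(duration_U / 60, 1)
| chart latest(duration_min) over system by date
```

With the values in minutes (or divide by 3600 for hours), the y-axis range becomes interpretable at a glance instead of showing tens of thousands of seconds.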
Hello everyone, quick question: I need to forward data from a HF to an indexer cluster. Right now I'm using the S2S tcpout function, with useAck, default load balancing, and maxQueueSize. I'm studying the possibility of using httpout instead of tcpout, due to traffic filtering. The documentation seems a bit light about httpout: is it possible to use the indexer load balancing, ack, and maxQueueSize functions with it? Thanks for your help! Jonas
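For context, what I have today in outputs.conf looks roughly like this (the group name, hostnames, and the 7MB value are placeholders, not my real config):

```ini
[tcpout]
defaultGroup = idx_cluster

[tcpout:idx_cluster]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true
maxQueueSize = 7MB
```

The question is essentially which of these settings have an equivalent under an [httpout] stanza.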
Hello Splunk Community, I'm currently facing an issue with integrating Group-IB threat intelligence feeds into my Splunk environment and could really use some assistance. Here's a brief overview of the problem:

1. Inconsistent sourcetype ingestion: Upon integrating the Group-IB threat intel feeds and installing the corresponding app on my Search Head, I've noticed inconsistent behavior in terms of sourcetype ingestion. Sometimes only one sourcetype is ingested, while other times it's five or seven. This variability is puzzling, and I'm not sure what's causing it.

2. Ingestion interruption: Additionally, after a few days of seemingly normal ingestion, I observed that the ingestion process stopped abruptly. Upon investigating further, I found the following message in the logs: *Health Check msg="A script exited abnormally with exit status 1" input="opt/splunk/etc/apps/gib_tia/bin/gib_tia.py" stanza = "xxx"* This message indicates that the intelligence downloads of a specific sourcetype have failed on the host.

This issue is critical for our security operations, and I'm struggling to identify and resolve the root cause. If anyone has encountered similar challenges or has insights into troubleshooting such issues with threat intel feed integrations, I would greatly appreciate your assistance. Thanks in advance,
@rzv424
Solution 1: You can create two alerts with the same logic and different crons.
The first alert's cron runs every day except Wednesday and Friday: */30 * * * 0,1,2,4,6
The second alert's cron runs every 30 minutes on Wednesday and Friday, skipping 5AM to 8AM: */30 0-4,8-23 * * 3,5
Solution 2: You can create one alert with a cron that runs every day of the week at a 30-minute interval: */30 * * * *
Then add the filtering to the logic of the query itself: use an eval command to output the current day and hour after your logic ends, and then filter out results per your exception requirement. Note that %H returns a zero-padded hour ("05", not "5"), so convert it to a number before comparing:
...... | eval now_day=strftime(now(), "%a"), now_hour=tonumber(strftime(now(), "%H")) | where NOT ((now_day="Wed" OR now_day="Fri") AND now_hour>=5 AND now_hour<8)
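For reference, the cron fields read left to right as minute, hour, day of month, month, day of week (Sunday = 0). The first schedule above, annotated:

```ini
# minute  hour  dom  month  dow (0=Sun; 3=Wed, 5=Fri excluded here)
  */30    *     *    *      0,1,2,4,6
```

So */30 fires at :00 and :30 of every listed hour, and the day-of-week list is what keeps Wednesday and Friday out of the first alert.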
You have two options:
1. Duplicate the alert and use a different cron expression for the different days/time periods.
2. Use the now() function to determine when the search is running and modify the results so that the alert isn't triggered.
We want an alert to run every day (Monday-Sunday) at a 30-minute interval, with one exception: it should not run on Wednesday and Friday from 5AM to 8AM. It should, however, still run during the other hours on Wednesday and Friday (apart from 5AM to 8AM). One cron expression cannot achieve that, hence we want to handle it in the alert logic.
Hi @PickleRick, There are multiple sources. I see event until November, from December zero events.   Thank you, Mattia
Hi @AMAN0113,
it's possible to share the DS only if it has to manage fewer than 50 clients.
Even so, most of the work is on the devices: if you are managing the addressing of the Deployment Server using a dedicated app, it's very easy to move the DS to another server; if not, it's better to move the other servers.
Anyway, to move the DS, you should create a custom add-on, called e.g. TA_deploymentClient, containing only two files:
app.conf
deploymentclient.conf
So these are the steps to move the DS:
on the old DS, add a new serverclass for all clients pointing to the new add-on TA_deploymentClient
on all clients, remove deploymentclient.conf from the $SPLUNK_HOME/etc/system/local folder
restart Splunk on all clients
verify (on the DS) that the new app is installed on all clients
install Splunk on a new system, copy the apps from $SPLUNK_HOME/etc/deployment-apps to the same folder on the new system
copy $SPLUNK_HOME/etc/system/local/serverclasses.conf to the same folder on the new system, and push the new configurations
To move the HEC, you have to re-create the inputs, possibly reusing the same tokens, but on the devices you have to point to the new destination. It's easier if you have a load balancer in front of the HEC servers, because in that case you only have to modify the configuration in the LB.
Same thing for the syslog receivers: you have to copy all the inputs from the old to the new servers and then change the destination addresses in each device.
See which has the highest number of connections (syslog, clients, or HEC) and keep it on the original server, moving the others.
Ciao.
Giuseppe
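A minimal sketch of the two files inside the TA_deploymentClient add-on described above (the hostname and management port are placeholders; point them at the new DS):

```ini
# TA_deploymentClient/default/app.conf
[install]
state = enabled

# TA_deploymentClient/default/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = new-ds.example.com:8089
```

Once this app is pushed from the old DS, each client phones home to the new DS on its next check-in, which is what makes the cutover hands-off.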
Hello, we are trying to achieve Power BI integration with Splunk. We have Power BI installed on a Windows machine and we also installed the ODBC driver to connect to Splunk. As part of the configuration, we added credentials (the same ones with which we connect to our Splunk Cloud instance) and the URL in the Power BI Get Data options, but we are getting the below error:   Steps: Power BI Desktop -> Get Data -> Other -> ODBC -> and when OK is clicked, the above-mentioned error is displayed. Can you please suggest how I can fix this? Thank you.
Hello, I want to monitor the health of the DB Connect app inputs and connections, and I noticed that the health monitor is not working; I'm getting the message "search populated no results". When I tried to investigate the issue, I found out that index=_internal is empty, which I guess is related. Can you please help me figure out why the index is empty and the health monitor is not working?
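A quick check I ran while investigating (a sketch; the 15-minute window is arbitrary). If this returns nothing, the instance isn't indexing its own internal logs at all, which would explain the empty health monitor:

```spl
index=_internal earliest=-15m
| stats count by host, sourcetype
```

If it returns events for some hosts but not others, that would instead point at forwarding of _internal from specific instances.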
Installing Splunk 9.2.0.1 on Windows Server 2019 ends prematurely. I get the issue whether I install the .msi from cmd with /passive or install it via the GUI. I have seen the issue resolved on earlier Windows Server versions by creating a dummy string in regedit, but that does not work on Server 2019. I have a log file, but it is too big to be inserted in my post here. splunk log
OK. _If_ you are seeing current events in other indexes, it should mean that the "main" part of your environment is working relatively OK. We don't have much info about your setup, so we don't know whether the index you mention should contain events from multiple sources or just one. If it's just one source, it may be that something caused that source to stop sending events, maybe due to the forwarder being turned off or due to network problems. If it's an index gathering data from multiple sources... are you sure someone didn't delete it from your setup? Do you see any events in this index and just no recent ones, or do you not see any events at all, even the old ones? What are your index parameters (size limits, retention settings)?
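Two searches that can narrow this down (sketches; your_index is a placeholder for the index in question). The first shows the daily event distribution, so you can see exactly when data stopped; the second pulls the index's size and retention settings:

```spl
| tstats count where index=your_index by _time span=1d
```

```spl
| rest /services/data/indexes/your_index splunk_server=*
| table splunk_server, currentDBSizeMB, maxTotalDataSizeMB, frozenTimePeriodInSecs
```

If currentDBSizeMB is near maxTotalDataSizeMB, or frozenTimePeriodInSecs is short, retention could be rolling the old buckets to frozen.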
Hi @rafamss, yes, the license is active.   Mattia
Hi @PickleRick,
1 - I can see other events in other indexes.
2 - One month ago I restarted the KV store; I didn't make other changes.
3 - I'm ingesting data; there is no frozen data.
What should I expect regarding the index in the inputs.conf file? Thank you in advance. Mattia
This API doesn't have an app version.
Hi, can you let me know what you were running on port 8088 besides the HEC?
Splunkd is not running on Linux after the EC2 instance is stopped and started. I tried all of these commands:
./splunk start --debug
/opt/splunk/bin/splunk status
/opt/splunk/var/log/splunk/splunkd.log
But I can't find the solution. Please share the solution, with the Linux commands as well.
Could you kindly paste the screenshot of the precise error you are receiving?
Is there any other option which can monitor Tomcat?  Please suggest! Regards, Eshwar
For the past couple of weeks, at least once per day one of our indexers goes into internal-logs-only mode, and the reason it states is that the license is expired. It's a bogus message, since the license definitely is not expired and is also not even close to exceeded, and restarting the Splunk service on the indexer always clears the error. Unfortunately, not much more is provided by the Splunk logs that would indicate anything I can investigate. Has anyone ever run into something similar, or might know where I can look to troubleshoot this further? It's making my life pretty tough because I have to constantly restart indexers due to this error.