All Posts

We want an alert to run every day (Monday through Sunday) at a 30-minute interval, with one exception: it should not run on Wednesday and Friday from 5 AM to 8 AM. It should, however, still run during all other hours on Wednesday and Friday (apart from 5 AM to 8 AM). A single cron expression cannot achieve this, so we want to change the alert logic instead.
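One common workaround (a sketch, not the only approach) is to keep the plain 30-minute cron schedule (`*/30 * * * *`) and suppress the alert inside the search itself, e.g. by appending a guard clause:

```
| eval wday=strftime(now(),"%w"), hr=tonumber(strftime(now(),"%H"))
| where NOT ((wday="3" OR wday="5") AND hr>=5 AND hr<8)
```

Here %w returns the day of the week with Sunday=0, so 3 is Wednesday and 5 is Friday. During the excluded window the guard filters out all results, so an alert condition of "number of results > 0" simply never fires.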
Hi @PickleRick, There are multiple sources. I see events until November; from December onward, zero events. Thank you, Mattia
Hi @AMAN0113, it's possible to share the DS only if it has to manage fewer than 50 clients. Even so, most of the work is on the devices: if you are managing the addressing of the Deployment Server using a dedicated app, it's very easy to move the DS to another server; if not, it's better to move the other servers instead. Anyway, to move the DS, you should create a custom add-on, called e.g. TA_deploymentClient, containing only two files: app.conf and deploymentclient.conf. So these are the steps to move the DS:
- on the old DS, add a new serverclass for all clients pointing to the new add-on TA_deploymentClient
- on all clients, remove deploymentclient.conf from the $SPLUNK_HOME/etc/system/local folder
- restart Splunk on all clients
- verify (on the DS) that the new app is installed on all clients
- install Splunk on a new system and copy the apps from $SPLUNK_HOME/etc/deployment-apps to the same folder on the new system
- copy $SPLUNK_HOME/etc/system/local/serverclass.conf to the same folder on the new system, then push the new configurations
To move the HEC, you have to re-create the inputs, possibly using the same tokens, but on the devices you have to point to the new destination. It's easier if you have a Load Balancer in front of the HEC servers, because in that case you only have to modify the configuration in the LB. Same thing for the syslog receivers: you have to copy all the inputs from the old to the new servers and then update the destination addresses on each device. See which has the highest number of connections (syslog, clients, or HEC) and keep it on the original server, moving the others. Ciao. Giuseppe
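As a sketch of what the TA_deploymentClient add-on described above might contain (the hostname is a placeholder, not a confirmed value):

```
# TA_deploymentClient/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = new-ds.example.com:8089
```

plus a minimal app.conf declaring the app. Once this is pushed from the old DS via the new serverclass, each client starts phoning home to the new DS, and the old server can be retired.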
Hello, We are trying to achieve Power BI integration with Splunk. We have Power BI installed on a Windows machine, and we also installed the ODBC driver to connect to Splunk. As part of the configuration, we added credentials (the same ones with which we connect to our Splunk Cloud instance) and the URL in the Power BI Get Data options, but we are getting the below error:   Steps: Power BI Desktop -> Get Data -> Other -> ODBC ->  and when we click OK, the above-mentioned error is displayed. Can you please suggest how I can fix this? Thank you.
Hello, I want to monitor the health of the DB Connect app's inputs and connections, and I noticed that the health monitor is not working. I'm getting the message "search populated no results". When I tried to investigate the issue, I found out that index=_internal is empty; I guess it's related. Can you please help me figure out why the index is empty and the health monitor is not working?
Installing Splunk 9.2.0.1 on Windows Server 2019 ends prematurely. I get the issue both when installing the .msi from cmd with /passive and when installing via the GUI. I have seen the issue resolved on earlier Windows Server versions by creating a dummy string in regedit, but that does not work on Server 2019. I have a log file, but it is too big to be inserted in my post here. splunk log
OK. _If_ you are seeing current events in other indexes, it should mean that the "main" part of your environment is working relatively OK. We don't have much info about your setup, so we don't know whether the index you mention should contain events from multiple sources or just one source. If it's just one source, it may be that something caused that source to stop sending events, perhaps the forwarder being turned off, or network problems. If it's an index gathering data from multiple sources... are you sure someone didn't delete it from your setup? Do you see any events in this index and just not recent ones, or do you not see any events at all, even the old ones? What are your index parameters (size limits, retention settings)?
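To answer those questions concretely, a couple of quick checks could look like this (a sketch; your_index is a placeholder for the index in question):

```
| tstats count where index=your_index by _time span=1d

| dbinspect index=your_index
| stats min(startEpoch) as oldest, max(endEpoch) as newest by state
```

The first shows the daily event distribution, so a sudden drop-off date is obvious at a glance; dbinspect lists the buckets Splunk still has on disk, which helps distinguish "data aged out by retention" from "data stopped arriving".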
Hi @rafamss, yes, the license is active.   Mattia
Hi @PickleRick,
1 - I can see events in other indexes.
2 - One month ago I restarted the KV store; I didn't make any other changes.
3 - I'm ingesting data; there is no frozen data.
What should I expect regarding the index from the inputs.conf file? Thank you in advance. Mattia
This API doesn't have an app version.
Hi, can you tell me what you were running on port 8088 other than the HEC?
Splunkd is not running on Linux after the EC2 instance was stopped and started. I tried all of these commands:
./splunk start --debug
/opt/splunk/bin/splunk status
/opt/splunk/var/log/splunk/splunkd.log
But I can't find the solution. Please share the solution with the Linux commands as well.
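A few common things to check when splunkd won't start after a hard stop/start of an instance (a sketch; paths assume a default /opt/splunk install, and the splunk user name is an assumption):

```
# See what splunkd itself reports
/opt/splunk/bin/splunk status
tail -n 100 /opt/splunk/var/log/splunk/splunkd.log

# A stale PID file left over from the hard stop can block startup
rm /opt/splunk/var/run/splunk/splunkd.pid

# File ownership may have changed (e.g. if splunk was once started as root)
chown -R splunk:splunk /opt/splunk

# Enable boot-start so splunkd comes up automatically after future restarts
/opt/splunk/bin/splunk enable boot-start -user splunk
```

These are generic first steps, not a definitive fix; the actual cause should be visible in the last lines of splunkd.log.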
Could you kindly paste the screenshot of the precise error you are receiving?
Is there any other option that can monitor Tomcat?  Please suggest! Regards, Eshwar
For the past couple of weeks, at least once per day one of our indexers goes into internal-logs-only mode, and the reason it states is that the license is expired. It's a bogus message, since the license definitely is not expired (and is not even close to being exceeded), and restarting the splunk service on the indexer always clears the error. Unfortunately, the splunk logs don't provide much more that would indicate anything I can investigate. Has anyone ever run into something similar, or might know where I can look to troubleshoot this further? It's making my life pretty tough because I have to constantly restart indexers due to this error.
We are planning to migrate a server that plays multiple roles (DS, HEC, proxy, SC4S, syslog, etc.) to multiple servers, possibly by splitting the roles; e.g., server A plays the DS role, server B takes care of the HEC services, and so on. What would be the easiest approach to achieve this? It seems like a lot of work. Would it be recommended to do so in the first place? What criteria should we keep in mind while doing this migration?
I have syslog events being written to a HF locally via syslog-ng; these events are then consumed via the file monitor, and the IP address in the log name is extracted as host. I now want to run an INGEST_EVAL on the IP address and use a lookup to change the host. If I run the command from search, I get the required result:

index=... | eval host=json_extract(lookup("lookup.csv",json_object("host",host),json_array("host_value")),"host_value")

This replaces host with "host_value". I have this working on an AIO instance with the config below. Now adding it to the HF tier:

/opt/splunk/etc/apps/myapp/lookups/lookup.csv (the lookup has global access and export = system):
host,host_value
1.2.3.4, myhostname

props.conf:
[mysourcetype]
TRANSFORMS-host_override = host_override

transforms.conf:
[host_override]
INGEST_EVAL = host=json_extract(lookup("lookup.csv",json_object("host",host),json_array("host_value")),"host_value")

When applied on the HF (restarted), I see some of the hostnames are changed to "localhost" while the others remain unchanged (but this may be due to the config not working OR the data coming from another HF without the test config applied). Any ideas? Thanks.
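Two things worth checking here (assumptions based on common INGEST_EVAL pitfalls, not a confirmed diagnosis of this setup): the lookup file must be present and readable on the HF itself, and the leading space in ", myhostname" becomes part of the returned value in a CSV lookup. A cleaned-up sketch of the lookup:

```
# lookup.csv -- no space after the comma
host,host_value
1.2.3.4,myhostname
```

If the lookup fails to match, json_extract() returns null, and a null host may then be replaced with a default value, which could match the "localhost" symptom described above.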
Hi, I want to know if there are any resources available to get a notification, or some other way to know, when a new Splunk Enterprise version is released. This could be through mail, an RSS feed, or something similar. I already know that this one exists: https://www.splunk.com/page/release_rss But it is not up to date. Thanks, Zarge
| bin _time span=1d | stats sum(SuccessCount) as SuccessCount sum(FailedCount) as FailedCount by _time
query:

|tstats count where index=new_index host=new-host source=https://itcsr.welcome.com/logs* by PREFIX(status:) _time
|rename status: as Total_Status
|where isnotnull(Total_Status)
|eval SuccessCount=if(Total_Status="0", count, Success), FailedCount=if(Total_Status!="0", count, Failed)

OUTPUT:

Total_Status | _time            | count | FailedCount | SuccessCount
0            | 2022-01-12 13:30 | 100   |             | 100
0            | 2022-01-12 13:00 | 200   |             | 200
0            | 2022-01-13 11:30 | 110   |             | 110
500          | 2022-01-13 11:00 | 2     | 2           |
500          | 2022-01-11 10:30 | 4     | 4           |
500          | 2022-01-11 10:00 | 8     | 8           |

But I want the output as shown in the table below:

_time      | SuccessCount | FailedCount
2022-01-13 | 110          | 2
2022-01-12 | 300          | 0
2022-01-11 | 0            | 12
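One way (a sketch building on the query above) to collapse this to one row per day is to zero-fill the counts and then aggregate with bin and stats:

```
| tstats count where index=new_index host=new-host source=https://itcsr.welcome.com/logs* by PREFIX(status:) _time
| rename status: as Total_Status
| where isnotnull(Total_Status)
| eval SuccessCount=if(Total_Status="0", count, 0), FailedCount=if(Total_Status!="0", count, 0)
| bin _time span=1d
| stats sum(SuccessCount) as SuccessCount, sum(FailedCount) as FailedCount by _time
```

Using 0 (rather than an undefined field like Success/Failed) as the if() fallback matters: it guarantees every row contributes a number to both sums, so days with no failures show FailedCount=0 instead of a blank.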