All Posts

Hi, try preview=true. The call should look like this: curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/mysearch_02151949/results?preview=true
Why do I get empty results when I use the REST API (results) endpoint in a Python search? And when I use the REST API (events) endpoint in Python, I get output like this. For your information, the SID is already successfully retrieved by the Python program, and when I try to use a curl command to query the SID's job (curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/mysearch_02151949/results), the results show on the screen without any error. Can you help me with this case? Thank you
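A minimal Python sketch of the flow being described (the requests library, the default management port 8089, the admin:pass credentials, and the SID mysearch_02151949 are all assumptions taken from the post): it polls the job until it reports isDone and only then asks for /results, since the results endpoint commonly returns an empty set while the search is still running.

import time
import requests

# Hypothetical values copied from the post; replace with your own.
BASE = "https://localhost:8089"
SID = "mysearch_02151949"
AUTH = ("admin", "pass")

# Poll the job status until the search has finished.
while True:
    job = requests.get(
        f"{BASE}/services/search/jobs/{SID}",
        auth=AUTH, params={"output_mode": "json"}, verify=False,
    ).json()
    if job["entry"][0]["content"]["isDone"]:
        break
    time.sleep(2)

# Fetch the finished results as JSON (count=0 returns all rows).
resp = requests.get(
    f"{BASE}/services/search/v2/jobs/{SID}/results",
    auth=AUTH, params={"output_mode": "json", "count": 0}, verify=False,
)
resp.raise_for_status()
print(resp.json()["results"])

If the job is still running and you need partial output, the preview=true parameter mentioned above (or the results_preview endpoint) is the usual route.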
Anyone coming here should know that in 9.2.0.1 this no longer works. Look at the dmc_instances_view_default_search macro for how the monitoring console does it now.
Hi @masakazu, let me understand: do you want to manage the Cluster Manager using the DS, or do you want to manage the Indexers directly using the DS? The second option isn't possible. For the first, it's always better to deploy apps (e.g. TA_indexes) to the Cluster Manager rather than the indexes.conf file in _cluster. In any case, you have to configure the $SPLUNK_HOME/etc/manager-apps folder as the deployment folder. Ciao. Giuseppe
Hi @Satyams14, it isn't a good idea to add a new question, even on the same topic, to another question, because with a new question you could get a quicker and probably better answer. Anyway, as I said in the previous answer, you have to install the Fortinet Add-On on the UF/HF that you're using to receive data and on the Search Heads. As I said, I suggest using an rsyslog receiver that writes the logs to files that you then read with the UF. Ciao. Giuseppe
In an indexer cluster environment, I set the following stanza in the deployment server's serverclass.conf file:

[serverClass:splunk_indexer_master_cluster]
stateOnClient = noop
whitelist.0 = <ClusterManagerA>

However, the _cluster folder under manager-apps disappeared, along with the indexes.conf inside it. Fortunately, indexes.conf remained in the cluster peers' app, so this was not a problem. If I want to use stateOnClient = noop, how should I maintain the indexes.conf deployed to the cluster on the cluster master?
I am using Outlook as the external mail server. Do you have any idea what value I should use for the mail server hostname?
I installed this app last week and have been experiencing the same problem. Has this issue been resolved since then?
Requirement - the alert only needs to trigger outside the maintenance window, even if the server is down during the maintenance window.

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval current_time=_time
| eval excluded_start_time=strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_end_time=strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S")
| eval is_maintenance_window=if(current_time >= excluded_start_time AND current_time < excluded_end_time, 1, 0)
| eval is_server_down=if((host="xx.xx.xxx.xxx" AND count == 0) OR (host="xx.xx.xxx.xxx" AND count == 0) 1, 0 )

Trigger condition:
| search is_maintenance window = 0 AND is_server_down=1

The alert is not getting triggered outside the maintenance window even though one of the servers is down. Please help me find what is wrong in the query, or suggest another possible solution.
Hello @gcusello, I have the same scenario, with the following architecture: Fortinet Analyzer > syslog forwarder (UF installed on it) > Deployment server > search head/indexer. Could you confirm how we can install the Fortinet add-on on the UF?
Hi @bowesmana,

index="index A"
| table _time, Audit
| addtotals fieldname=Total
| foreach * [eval Audit=round(('Audit'/Total*100),2)]

Above is the query I created based on your idea, but it doesn't seem to work. The screenshot below shows the result of the query; the values are not showing as percentages.
Hi @KendallW, it's not working, it's just stacking the values of the bar chart.
That's true. Remember that docs pages have a feedback form at the bottom. You can use it to... provide feedback. And yes, this feedback (of course, if it's precise and reasonable, not just "I don't like this page" ;-)) is read, and the docs pages do get better over time because of it.
The current documentation is very light regarding httpout; I hope it will be improved in future versions.
Which email provider are you planning to use? Do you have your own email server, or are you using Gmail or another online email service?
"Connection refused" typically indicates that the target server is not listening on the target port. This would make sense if your splunk server is using "localhost" as the mail server hostname and y... See more...
"Connection refused" typically indicates that the target server is not listening on the target port. This would make sense if your splunk server is using "localhost" as the mail server hostname and you are not running an email server on your Splunk machine. If you would like to use an external mail server, then yes you should change the mail server hostname in email settings to match the external mail server.
Hello, I am facing the same issue as you... I am not receiving email alerts from Splunk. Instead of localhost, what name should I use for the mail server host name? Could you please suggest?
@marnall, after using your query I am getting the error message "connection refused" in my search results. Should I change localhost to something else in the mail server hostname in the email settings in the Splunk UI? Please let me know.
Hi @maede_yavari, the message means that you have to copy app.conf from the default folder to the local one. Then, there's an error in outputs.conf: check it, and if you want, share it here, masking any IP addresses. Ciao. Giuseppe
Hi @NoSpaces,

You can reduce the log level of WorkloadsHandler in %SPLUNK_HOME%\etc\log-local.cfg. Create the file if it does not exist and add the following:

[splunkd]
category.WorkloadsHandler=FATAL

Restart Splunk to allow the change to take effect. You can temporarily change the active level on a running instance from Settings > Server settings > Server logging > WorkloadsHandler, or using the REST API:

curl -k -u admin https://localhost:8089/services/server/logger/WorkloadsHandler -d level=FATAL

curl is shipped with all modern releases of Windows, but you can use whichever HTTP client you prefer.
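If you would rather script the change than use curl, an equivalent call with Python's requests library might look like the sketch below (the admin credentials and the default management port 8089 are assumptions; verify=False mirrors curl's -k flag):

import requests

resp = requests.post(
    "https://localhost:8089/services/server/logger/WorkloadsHandler",
    auth=("admin", "changeme"),  # replace with real credentials
    data={"level": "FATAL"},
    verify=False,                # equivalent to curl -k; prefer a proper CA bundle in production
)
resp.raise_for_status()
print("WorkloadsHandler log level set, HTTP", resp.status_code)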