All Topics
Hi, I need to delete some KV Store Collections, and the only way I have to perform this kind of action is the REST API, since I'm on Splunk Cloud. When I create KV Store Collections, I use this request, following the docs in "Use the Splunk REST API to manage KV Store collections and data in Splunk Cloud Platform or Splunk Enterprise":

    curl -k -u USER -d name=KV-COLLECTION-NAME https://HOSTNAME.splunkcloud.com:8089/servicesNS/nobody/APP_NAME/storage/collections/config

What I would like to know is how I can delete a KV Store Collection using the same approach. Thanks
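A minimal sketch of one possible approach, assuming the documented pattern where the collection name becomes part of the endpoint path: issue an HTTP DELETE against the collection's config resource (or against its data resource if you only want to empty it). KV-COLLECTION-NAME, HOSTNAME, and APP_NAME are placeholders carried over from the question.

    # Delete the collection definition (and its data); placeholders as above
    curl -k -u USER -X DELETE https://HOSTNAME.splunkcloud.com:8089/servicesNS/nobody/APP_NAME/storage/collections/config/KV-COLLECTION-NAME

    # Or delete only the records, keeping the collection definition
    curl -k -u USER -X DELETE https://HOSTNAME.splunkcloud.com:8089/servicesNS/nobody/APP_NAME/storage/collections/data/KV-COLLECTION-NAME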
Hi Team, I am working in a multisite cluster environment. We need to perform certificate renewals with a 21-character passphrase, and the same passphrase is used in server.conf across the multisite cluster. During the cluster restart we hit "Master node down" in splunkd.log and cannot progress. During investigation we found that in the [general] stanza the encrypted passphrase value corresponds to the default "changeme" instead of the encrypted value of our 21-character passphrase, while in the [clustering] stanza the encrypted value does correspond to the 21-character passphrase. To make the cluster work as-is, I think we need to overwrite the default password (i.e. changeme) in server.conf with the custom passphrase, but currently that is not happening. We need to complete the certificate renewals before 30/10/2021, so this is a bit urgent. Any help would be appreciated. Many thanks, Lalitha
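For reference, a sketch of what the relevant server.conf stanzas might look like before a restart, assuming pass4SymmKey is the setting being renewed: set the plaintext passphrase in both stanzas and let splunkd re-encrypt it on startup. The placeholder stands in for the 21-character passphrase.

    # server.conf; plaintext values are re-encrypted by splunkd on restart
    # (assumption: pass4SymmKey is the key being renewed)
    [general]
    pass4SymmKey = <21-character-passphrase>

    [clustering]
    pass4SymmKey = <21-character-passphrase>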
How can I use curl to overwrite the host or the query of an alert? I was testing the example below, where I need to overwrite the SPL inside an alert. Ideally I just want to overwrite the host in the SPL query plus one other variable; however, it seems I have to overwrite the full query.

    curl -k -u dev_admin:devadmin https://localhost:8089/servicesNS/admin/lookup_editor/saved/searches/KPI_Alert_TEMPLATE -d cron_schedule="31 17 * * *" -d search="index=mlc_live | stats count(host) by host"
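A sketch of one way to work around the whole-query replacement, assuming the saved-search endpoint only accepts a complete search string: template the host into the query on the client side before POSTing. NEW_HOST is a hypothetical shell variable.

    # Hypothetical: substitute the host into the full query, then overwrite the saved search
    NEW_HOST="webserver01"
    curl -k -u dev_admin:devadmin \
      https://localhost:8089/servicesNS/admin/lookup_editor/saved/searches/KPI_Alert_TEMPLATE \
      -d cron_schedule="31 17 * * *" \
      --data-urlencode search="index=mlc_live host=${NEW_HOST} | stats count(host) by host"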
Hi folks, a user in my company discovered that the pre-built list of correlation searches in the filter on the Incident Review dashboard is incomplete. I can retrieve the correlation searches in the Content view and in the Alerts view, and they have triggered notables. I tried to find the search that runs to populate the filter, but my HTML/JS skills are not sufficient. Any ideas? Thanks!
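One way to cross-check which correlation searches exist, as a hedged sketch (the attribute name is an assumption based on how recent Enterprise Security versions tag correlation searches on saved-search objects):

    | rest /servicesNS/-/-/saved/searches
    | search action.correlationsearch.enabled=1
    | table title eai:acl.app disabled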
Hi all, I'm just about to upgrade our Phantom / Splunk SOAR version to 5.0.1. The version compatibility matrix in the documentation for the Phantom Remote Search app suggests that this version isn't supported, though ( https://docs.splunk.com/Documentation/PhantomRemoteSearch/1.0.17/PhantomRemoteSearch/Abouttheapp ). I'm fairly sure it is compatible, but could someone please confirm before I upgrade my production Phantom platform? Also, just an observation: 14 indexes! Would it not be more in keeping with general recommendations/strategy to have one index (or more for multiple Phantom instances) and use multiple sourcetypes? Many thanks, Mark
I am in the process of integrating AppDynamics with our build tool. I am going to add a step in the build to create an APPLICATION_DEPLOYMENT event whenever there is a code deployment to the production server. I have experimented with the event generation API and have it working. The API documentation states that events have a lifespan of two weeks unless the event is archived. Is there a way to have an event automatically archived, as we want deployment events to last for a year? Is there another way to capture what we need? Dale Chapman
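For context, a sketch of the event-creation call being described, assuming the standard Controller REST endpoint; CONTROLLER_HOST, APP_NAME, and the credentials are placeholders, and the archiving question itself remains open:

    # Hypothetical placeholders; creates an APPLICATION_DEPLOYMENT event via the Controller REST API
    curl --user USER@ACCOUNT:PASSWORD \
      -X POST "https://CONTROLLER_HOST:8090/controller/rest/applications/APP_NAME/events" \
      -d "summary=Build 1234 deployed" \
      -d "eventtype=APPLICATION_DEPLOYMENT" \
      -d "severity=INFO"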
Invalid key in stanza [workday://user_activity] in /opt/splunk/etc/apps/TA-workday/local/inputs.conf, line 2: include_target (value: 0).

    [workday://user_activity]
    include_target = 0
    index = workday
    input_name = user_activity
    interval = 300
    include_target_details = 0

I need some help with this one: I am trying to ingest logs from my Workday TA, and the logs stopped reporting.
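A sketch of the likely fix, assuming include_target is simply not a key the TA's inputs.conf.spec declares (note the similar-looking include_target_details key that is already present): remove or comment out the unsupported line.

    [workday://user_activity]
    # include_target = 0    <- not in the TA's inputs.conf.spec; removing it should clear the "Invalid key" error (assumption)
    index = workday
    input_name = user_activity
    interval = 300
    include_target_details = 0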
Hi, I need to use a post-process search to display a timechart. Here is my base (id) configuration:

    <search id="test">
      <query>index=tutu sourcetype="ica" $source$ $type$ $domain$ $site$ $ezconf$ | fields ica_latency_last_recorded ica_latency_session_avg idle_sec site host</query>
      <earliest>-7d@h</earliest>
      <latest>now</latest>
    </search>

and here is the post-process configuration:

    <search base="test">
      <query>| search idle_sec &lt; 300 | timechart span=1d avg(ica_latency_session_avg) as "Latence moyenne de la session (ms)"</query>
    </search>

As you can see, my timechart covers the last 7 days, but no values are returned. What is wrong, please?
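A hedged guess at the cause: a post-process timechart needs _time, and a base search that whitelists fields may not pass it through to the post-process search. A sketch of the adjusted base search, identical except for the added _time:

    <search id="test">
      <query>index=tutu sourcetype="ica" $source$ $type$ $domain$ $site$ $ezconf$ | fields _time ica_latency_last_recorded ica_latency_session_avg idle_sec site host</query>
      <earliest>-7d@h</earliest>
      <latest>now</latest>
    </search>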
I have a Python script which makes an API call and gets the events. The number of events it collects is correct; however, it is adding duplicate entries per field. Can you please assist? Here is my script:

    response = helper.send_http_request(rest_url, 'GET', parameters=queryParam, payload=None, headers=headers, cookies=None, verify=False, cert=None, timeout=None, use_proxy=False)
    r_headers = response.headers
    r_json = response.json()
    r_status = response.status_code
    if r_status != 200:
        response.raise_for_status()
    final_result = []
    for _file in r_json:
        responseStr = ''
        fileid = str(_file["fileid"])
        state = helper.get_check_point(fileid)
        if state is None:
            final_result.append(_file)
            helper.save_check_point(fileid, "Indexed")
    event = helper.new_event(json.dumps(final_result), time=None, host=None, index=None, source=None, sourcetype=None, done=True, unbroken=True)
    ew.write_event(event)

The response:

    [
      {
        "fileid": "abc.txt",
        "source": "source1",
        "destination": "dest1",
        "servername": "server1"
      },
      {
        "fileid": "xyz.txt",
        "source": "source2",
        "destination": "dest2",
        "servername": "server2"
      }
    ]

After collecting the data to the index, the result looks like this:

    fileid            source            destination    servername
    abc.txt abc.txt   source1 source1   dest1 dest1    server1 server1
    xyz.txt xyz.txt   source2 source2   dest2 dest2    server2 server2
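A minimal sketch of one possible restructuring, assuming the duplication comes from indexing the whole list as a single JSON event: write one event per record, so each field appears exactly once per event.

    # Sketch: emit one event per new record instead of one event for the whole list
    # (assumption about the cause of the duplicated field values)
    for _file in r_json:
        fileid = str(_file["fileid"])
        if helper.get_check_point(fileid) is None:
            helper.save_check_point(fileid, "Indexed")
            event = helper.new_event(json.dumps(_file), time=None, host=None, index=None,
                                     source=None, sourcetype=None, done=True, unbroken=True)
            ew.write_event(event)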
We are starting our journey into Splunk and I need some help. I am trying to send an alert when a new version of our antivirus is installed on our machines. I am monitoring the Windows Application event log, so it would be something like: grab the version from 20 minutes ago in the logs and, if it differs from the current version, send the alert.

    "Message=Windows Installer installed the product. Product Name: Antivirus Software. Product Version: 1.0.0.000.1. Product Language: 001. Manufacturer: Antivirus. Installation success or error status: 0"

Any ideas on how to start this search?
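A hedged starting point, assuming the version string can be pulled out of the Message field with rex and that a host counts as "changed" when more than one distinct version appears in the search window; the index and sourcetype names are placeholders:

    index=wineventlog sourcetype=WinEventLog "Windows Installer installed the product" "Product Name: Antivirus Software"
    | rex field=Message "Product Version: (?<product_version>[\d\.]+)"
    | stats dc(product_version) as version_count latest(product_version) as current_version earliest(product_version) as previous_version by host
    | where version_count > 1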
Hi, I have a query like this:

    index=star eventtype=login-history action=success Username=** | stats count by Username | sort - count | head 10

So in my result I have a list of usernames with the login count for each one. I know some of the users are bots, so I want to add a prefix to those usernames, like BOT_Username, probably with an if condition. For example, in my result I have:

    Alice  10
    Bob     8
    Carol   7
    David   4
    Eddie   2

I know Alice and Bob are bots, so I need:

    BOT_Alice  10
    BOT_Bob     8
    Carol       7
    David       4
    Eddie       2

Thanks in advance!
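A minimal sketch of the if/eval approach, assuming the bot names are known up front:

    index=star eventtype=login-history action=success Username=**
    | eval Username=if(Username=="Alice" OR Username=="Bob", "BOT_".Username, Username)
    | stats count by Username
    | sort - count
    | head 10

With many bot accounts, a lookup table mapping usernames to a bot flag would scale better than a growing OR chain.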
I currently have a Splunk cluster that looks like this:

    Splunk        CentOS Version   Splunk Version
    Master        7.5              7.0.0
    Forwarder     7.5              Universal Forwarder 6.6.3
    Search Head   6.5              7.0.0
    Indexer 1     6.5              7.0.0
    Indexer 2     6.5              7.0.0
    Indexer 3     6.5              7.0.0
    Indexer 4     6.5              7.0.0

I have 4 new servers that I want to build as indexers to replace the 4 indexers I currently have (while moving from Splunk 7.0.0 to Splunk 8.x in the process). My first plan was to build the 4 new indexers, join them to the current indexer cluster so that existing data could replicate between them, and then retire the old indexers from the cluster. The problem with this idea is that my existing indexers run CentOS 6.x and my new indexers will run CentOS 7.x; as I understand it from reading the documentation, it is not possible to have different OS versions running in the same indexer cluster. As there is no easy way to upgrade from CentOS 6.x to CentOS 7.x, I'd rather avoid having to do that.

So my next idea was to build the new indexers as a separate cluster, so that I have one cluster of CentOS 6.x indexers and one cluster of CentOS 7.x indexers, then send new data only to the new cluster and let old data eventually age off on the old cluster, at which point I can retire it. During this time I would have a search head that searches across both clusters, so that old and new data remain searchable. My questions:

Will my search head running on CentOS 6.x be able to search across both a CentOS 6.x indexer cluster and a CentOS 7.x indexer cluster?

Should I instead create the second cluster of indexers and then manually migrate data between the old and new clusters? That seems like a harder process.

How would you approach this?
Hi everyone, I have strange Splunk behavior regarding one of my indexes, but first a little background:

- The environment is an indexer cluster with 1 SH
- Proxy logs are ingested from a syslog server via a universal forwarder (monitor input)
- The monitor input uses the host_segment option to extract host data
- The sourcetype is set to "cisco:wsa:squid" from the Splunkbase app "Splunk_TA_cisco-wsa"; I'm not using any local configuration for that sourcetype (on any instance)
- There are no props.conf stanzas that apply configuration based on source or host (i.e. [host::something]) for this specific source or host

The issue: when I use search 1 (with the field "host") in fast mode, it is 10 to 20 times slower than search 2.

Search 1:

    index=my_index sourcetype=cisco:wsa:squid | fields _time, _indextime, source, sourcetype, host, index, splunk_server, _raw

Search 2:

    index=my_index sourcetype=cisco:wsa:squid | fields _time, _indextime, source, sourcetype, index, splunk_server, _raw

I have already reviewed the full configuration, and there is nothing on any of the instances that modifies the field "host" in any way, yet when I use it in my search it is drastically slower, which causes issues further down the line. The issue does not manifest on other indexes, and all indexes are configured with the same options in indexes.conf. I hope someone can give me a good clue for troubleshooting.
I was trying to onboard data from Cisco Meraki when I noticed the following: the Splunk Add-on for Cisco Meraki is currently getting data from only the first network_id in the Organization. Could anyone help me understand how we could enhance the code so that it fetches data from all the networks in the Organization?
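A hedged sketch of the general pattern, assuming the Meraki Dashboard API's per-organization networks endpoint: enumerate every network ID first, then loop over all of them instead of taking only the first element. API_KEY and ORG_ID are placeholders, and fetch_events is a hypothetical stand-in for the add-on's per-network collection logic.

    import requests

    API_KEY = "<your-api-key>"   # placeholder
    ORG_ID = "<your-org-id>"     # placeholder
    headers = {"X-Cisco-Meraki-API-Key": API_KEY}

    # Enumerate all networks in the organization, not just the first one
    resp = requests.get(
        f"https://api.meraki.com/api/v1/organizations/{ORG_ID}/networks",
        headers=headers,
    )
    resp.raise_for_status()

    for network in resp.json():
        network_id = network["id"]
        # fetch_events(network_id)  # hypothetical per-network collection call
        print("would collect events for", network_id)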
Hi, I have some single-string log statements that look like the following:

    INFO ControllerNameHere f1d46561-b382-4685-9d7a-ebd76f40c355 EXT | <action> | Time 80

I want to make a query that groups the <action> types and then calculates the min, max, and avg of the Time part of the string. So far I have had success with the average:

    index=my_index* host=hostName EXT
    | rex field=_raw "(EXT \| )(?<CategoryString>.+)( \| Time)"
    | rex field=_raw "(\| Time)(?<TimeValue>.+)"
    | stats mean(TimeValue) BY CategoryString

which returns the mean value of all entries. As for the underlying data: everything is grouped as 4 actions per request, and the first action has the following values: 80, 45, 71, 63, 458. When I run the above query I get a correct mean. But when I switch to e.g. max, I get a maximum value of 80, which seems wrong; however, it corresponds to the latest value. What am I missing here?
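A hedged guess at the cause: the second rex captures the leading space, so TimeValue is compared as a string, and "80" sorts above "458" lexicographically. A sketch with a digits-only capture and an explicit numeric conversion:

    index=my_index* host=hostName EXT
    | rex field=_raw "EXT \| (?<CategoryString>.+) \| Time"
    | rex field=_raw "\| Time (?<TimeValue>\d+)"
    | eval TimeValue=tonumber(TimeValue)
    | stats min(TimeValue) max(TimeValue) avg(TimeValue) BY CategoryString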
Hi all, I have followed the instructions at https://docs.splunk.com/Documentation/AddOns/released/Tomcat/Recommendedfields, but as you can see in the picture below, there are question marks before and after the field value. Why is that needed? Thank you in advance for your help.
I'll probably find my solution eventually, but if someone has something at hand, I'd be grateful if they could share. I have some results; let's say they look like this:

    Count   FieldA   FieldB
    11      a
    12      b
    34      c        1
    54      d        1
    462     e
    0       f        3
    12      g        3
    4       h        3

I would like the values from the Count column summed up, but only for the events that have FieldB defined. The rest I want left split by FieldA. For those summed up, I want FieldA aggregated into a multivalue field. So effectively the output should be:

    Count   FieldA   FieldB
    11      a
    12      b
    88      c d      1
    462     e
    16      f g h    3

OK, I think I can get it done by adding another column created conditionally from either FieldA or FieldB, then aggregating by that field. Something like this:

    <initial search>
    | eval tempfield=if(isnull(FieldB), "FieldA-".FieldA, "FieldB-".FieldB)
    | stats sum(Count) as Count values(FieldA) as FieldA values(FieldB) as FieldB by tempfield
    | fields - tempfield

Any nicer way?
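For what it's worth, a sketch of essentially the same trick using coalesce, leaning on the fact that concatenating with a null field yields null:

    <initial search>
    | eval tempfield=coalesce("B-".FieldB, "A-".FieldA)
    | stats sum(Count) as Count values(FieldA) as FieldA values(FieldB) as FieldB by tempfield
    | fields - tempfield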
Dears, we have the deployment server in the DMZ zone and the indexers in the DRN zone. The Windows team is pushing the packages using SCCM to our DMZ deployment server, and we can see those clients in the deployment server, but we are not seeing a single log in Splunk, which means data is not being indexed. Please find the attached architecture screenshot for your reference.

More details:

1. Deployment server is in the DMZ zone
2. Indexers are in the DRN zone

The following outputs.conf is for the Windows DMZ log sources sending to the Windows universal forwarders:

    [root@********local]# cat outputs.conf
    [tcpout]
    defaultGroup = xxxx_idx_win_prod
    indexAndForward = false

    [indexAndForward]
    index = false

    [tcpout:xxxx_idx_win_prod]
    autoLBVolume = 1048576
    server = xxxxsplkwinfrwdr001.xxxxx.xx.xxxx:9997, xxxxsplkwinfrwdr002.xxxxx.xx.xxxx:9997
    sslPassword = password
    clientCert = $SPLUNK_HOME/etc/auth/server.pem
    autoLBFrequency = 5
    useACK = true

Deployment server configuration (applies to the PROD DRN indexers; forwarders to indexers), in /opt/splunk/etc/deployment-apps/xx-xxxx_xxxx_idx_prod_outputs/local:

    cat outputs.conf
    [tcpout]
    defaultGroup = xxxx_idx_prod
    indexAndForward = false

    [indexAndForward]
    index = false

    [tcpout:xxxx_idx_prod]
    autoLBVolume = 1048576
    server = <all DRN indexer IP addresses listed here with port 9997>
    sslPassword = password
    clientCert = $SPLUNK_HOME/etc/auth/server.pem
    autoLBFrequency = 5
    useACK = true

inputs.conf:

    cat inputs.conf
    [splunktcp-ssl:9997]
    disabled = 0

    [SSL]
    sslPassword = password
    clientCert = $SPLUNK_HOME/etc/auth/server.pem

Kindly advise us on this.
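As a hedged first troubleshooting step, it may help to confirm on a forwarder whether the output connection is actually active, and then check the indexers' view of incoming connections:

    # On a universal forwarder: list configured and active forward-servers
    $SPLUNK_HOME/bin/splunk list forward-server

    # On the search head, a search like the following shows what the indexers are receiving:
    #   index=_internal source=*metrics.log* group=tcpin_connections | stats count by sourceIp, fwdType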
If you look at the picture, I can't see the real-time alert option. Could you please assist me in getting this on my Splunk?
I created a HEC token called test_app, initially for accepting log data from a test app. That app has morphed into a prod app. I would like to change the HEC token name to prod_app. How do I do that? Thanks.
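Assuming tokens can't be renamed in place (the token name is the stanza name), a sketch of the create-new/remove-old route via the management REST API; the hostname and credentials are placeholders:

    # Create a new token named prod_app (hypothetical host and credentials)
    curl -k -u admin https://HOSTNAME:8089/services/data/inputs/http -d name=prod_app

    # After switching the sending app over, delete the old test_app token
    curl -k -u admin -X DELETE https://HOSTNAME:8089/services/data/inputs/http/test_app

Note that the new token gets a new GUID value, so the sending app's token value has to be updated as well.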