Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, I would like to replace the Splunk self-signed certificate on a heavy forwarder for Splunk Web, and found a document called "Configure Splunk Web to use TLS certificates". We want to use a valid signed certificate so users don't get the untrusted-website warning in their browsers. Will changing just the Splunk Web SSL certificate have any effect on the secure communications between Splunk Enterprise components? If someone can point me in the right direction, that would be great! Thanks, Tim

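A minimal sketch of the relevant settings, assuming the signed certificate and key are already in PEM form under a path of your choosing (the file names and paths here are hypothetical):

# $SPLUNK_HOME/etc/system/local/web.conf -- hypothetical paths
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key

These settings only affect Splunk Web on its HTTP port; communications between Splunk Enterprise components (management port 8089, forwarding) are configured separately in server.conf, inputs.conf, and outputs.conf, so changing the web certificate alone does not touch them.
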
Hello. I'm super new to Splunk (love the tool for assessing Juniper FW logs), but I'm being tasked at a new job with something out of my zone. I'm a Palo Alto guy, but my boss would like our Forcepoint logs to run through this app. I have it installed, but this string:

tstats average =false count FROM datamodel=mail_log

gives me an error saying it can't find that datamodel. Now, this model is fully operational under PP for Proofpoint, with all permissions available to all; I did check that. Can anyone help me with this? I'm using Splunk Enterprise 8.2.4 and DomainTools 4.3.0. Thank you in advance!

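For reference, a hedged minimal form of that query: tstats takes summariesonly rather than average, and datamodel names are case-sensitive; the CIM mail model is named Email, so mail_log would have to be a custom model shared into the app context you're searching from.

| tstats summariesonly=false count from datamodel=Email

If mail_log really exists, check its sharing under Settings > Data models and confirm it is visible from the app you're searching in.
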
What is the use of (index=cim sourcetype=modular:alert:risk)? What happens if it stops generating logs?

https://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/MigrateKVstore#Upgrade_KV_store_server_to_version_4.2

Upgraded Splunk Enterprise from 8.2.5 to version 9.0.0. Looking to see how to upgrade mongo from 4.0 to 4.2 on a single-instance deployment. During the Splunk Enterprise upgrade, the migration to wiredTiger failed due to lack of disk space; the upgrade still continued and made the first hop of the mongo upgrade, from version 3.6 to 4.0. It looks like after version 4.0 it tried to do the engine migration but couldn't because of the lack of available disk space, and therefore didn't do the last hop to version 4.2 of mongo. We have since fixed the disk space issue and were able to complete the engine migration to wiredTiger; however, we don't know how to bump the mongo version up to 4.2. The link above covers upgrading mongo in a cluster but not on a single instance, and looking at the options in splunk help kvstore I don't see anything for upgrading mongo on a single instance either. I tried splunk start-shcluster-upgrade kvstore -version 4.2 -isDryRun true, but of course it detected it wasn't a search head cluster. Lastly, I'm trying to understand the difference in the output of mongo versions between the kvstore-status command and splunk cmd mongod -version; they are clearly pulling from two different places.

[App Key Value Store migration] Starting migrate-kvstore.
Created version file path=/opt/splunk/var/run/splunk/kvstore_upgrade/versionFile36
Started standalone KVStore update, start_time="2022-06-22 15:21:46".
[App Key Value Store migration] Checking if migration is needed. Upgrade type 1. This can take up to 600seconds.
[App Key Value Store migration] Migration is not required.
Created version file path=/opt/splunk/var/run/splunk/kvstore_upgrade/versionFile40
Not enough space to upgrade KVStore (or backup). You will need requiredBytes=3107201024 bytes, but KV Store DB filesystem only has availableBytes=2286272512
[App Key Value Store migration] Starting migrate-kvstore.
[App Key Value Store migration] Storage Engine hasn't been migrated to wireTiger. Cannot upgrade to service(42)

[splunk ~/var/run/splunk/kvstore_upgrade]$ splunk show kvstore-status --verbose | grep serverVersion
serverVersion : 4.0.24
[splunk ~/var/run/splunk/kvstore_upgrade]$ splunk cmd mongod -version
db version v4.2.17-linux-splunk-v3
git version: be089838c55d33b6f6039c4219896ee4a3cd704f
OpenSSL version: OpenSSL 1.0.2zd-fips 15 Mar 2022
allocator: tcmalloc
modules: none
build environment:
    distmod: rhel62
    distarch: x86_64
    target_arch: x86_64
[splunk ~/var/run/splunk/kvstore_upgrade]$

Hi, good morning. The Web UI on the indexer is not starting even though the following settings are in place in .../system/default/web.conf:

[settings]
# enable/disable the appserver
startwebserver = 1
# First party apps:
splunk_dashboard_app_name = splunk-dashboard-studio
# enable/disable splunk dashboard app feature
enable_splunk_dashboard_app_feature = true
# if the port number tag is missing or 0, the server will NOT start an http listener
# this is the port used for both SSL and non-SSL (we only have 1 port now)
httpport = 8000

Then I added a .../system/local/web.conf with the following to see if it enables, but the Web UI is still disabled:

enableSplunkWebSSL = true

Any help is greatly appreciated.

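One hedged thing to check: system/default is the lowest-precedence layer, so any web.conf in system/local or an app's local directory overrides it, and enableSplunkWebSSL alone doesn't turn the web server on. A minimal local override, with the port assumed to stay 8000:

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
startwebserver = 1
httpport = 8000

splunk btool web list settings --debug | grep -i startwebserver

The btool line shows which file actually wins for that setting; web_service.log under $SPLUNK_HOME/var/log/splunk usually says why the UI didn't come up.
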
Hi all, day-1 Splunker here. I'd like to use an ingested start and stop time in index BLUE and use it to range-filter events from index RED. Using the Splunk event _time on RED is OK. Just a nudge in the right direction is what I'm after... thanks all.

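One possible nudge, sketched under the assumption that the BLUE events carry epoch-time fields named start_time and stop_time (hypothetical names): a subsearch can hand time bounds to the outer search through the return command.

index=RED [ search index=BLUE | head 1 | return earliest=start_time latest=stop_time ]

If the ingested times are strings, convert them with strptime in the subsearch before returning them.
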
Hi team, is there any way to use REST syntax to retrieve the following:

1. A REST query to retrieve all unique searches performed on a given index, and a count of how many times each was searched.

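The REST search endpoints only expose currently dispatched jobs, so a hedged alternative sketch for historical counts uses the _audit index (this requires read access to _audit, and myindex is a placeholder):

index=_audit action=search info=granted search=*
| rex field=search "index\s*=\s*\"?(?<searched_index>[^\s\"]+)"
| search searched_index=myindex
| stats count by search
| sort - count

For jobs running right now, | rest /servicesNS/-/-/search/jobs returns the dispatched search strings instead.
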
We recently upgraded the Add-on for Cisco ASA from version 3.4.0 to 5.0.0. In version 3.4.0, KV_MODE was set to auto, which meant that a lot of information from DAP messages (734*) was extracted into named fields. I.e., for this log:

Jun 24 13:52:39 fwhost %ASA-7-734003: DAP: User username, Addr A.B.C.D: Session Attribute endpoint.anyconnect.publicmacaddress = "aa-bb-cc-dd-ee-ff"

a field named endpoint_anyconnect_publicmacaddress was created with the value aa-bb-cc-dd-ee-ff. In version 5.0.0, KV_MODE is none, and an extraction was put in place that creates two different fields:

endpoint_attribute_name with value endpoint.anyconnect.publicmacaddress
endpoint_value with value aa-bb-cc-dd-ee-ff

When looking at a single log this is no problem, but we typically put several logs together via the transaction command, grouping by user, src, dvc, so all messages from the same connection are grouped. Now we get two multivalue fields with no apparent (this might be my ignorance speaking) way to match the attribute name with the value. I've tried putting mvlist=true on the transaction command and it seems to help, but all the other fields get repeated N times (for all messages added to the transaction). Is there a simpler way to match the attribute name with its corresponding value after executing transaction with mvlist=false?

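A hedged idea rather than a definitive fix: zip the two fields into one name=value pair per event before the transaction, so the correlation survives the grouping (field names are taken from the post above):

| eval dap_attr = mvzip(endpoint_attribute_name, endpoint_value, "=")
| transaction user src dvc

After the transaction, dap_attr is a single multivalue field whose entries keep each attribute name attached to its value, with mvlist left at its default.
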
Hi team, we had a couple of dashboards created by ex-employees, and the existing team is unable to access them. We don't have admin privileges to access them either. Is there a REST query to fetch the dashboard names along with the query (code), so that we can save them under a new name and use them for reference?

Thank you, SriCharan

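A sketch of the usual REST approach; whether it returns the objects depends on your role's read access, since REST enforces the same permissions as the UI (the owner filter is a placeholder):

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:acl.owner="former.employee"
| table title eai:acl.app eai:acl.owner eai:data

The eai:data column carries each dashboard's XML source, which can be pasted into a new dashboard.
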
We have a multi-member search head cluster, but we would like a particular add-on/app to be disabled on one search head while staying enabled on all the other search heads. That particular app needs an integration with an external service, which at the moment doesn't seem feasible due to some network limitations. I'm looking for something like the below in local/app.conf:

[install]
state = disabled

Is it OK to do that? Or is there another good way of achieving the same?

Hi team, I am trying to run the AppDynamics machine agent as a container to monitor the existing app containers. I have gone through the issue discussion and added this line to my environment file:

APPDYNAMICS_SIM_ENABLED=true

But I still receive this error log:

c8ebf9f96874==> [system-thread-0] 24 Jun 2022 13:04:05,719 DEBUG RegistrationTask - Encountered error during registration.
com.appdynamics.voltron.rest.client.NonRestException: Method: SimMachinesAgentService#registerMachine(SimMachineMinimalDto) - Result: 401 Unauthorized - content:
at com.appdynamics.voltron.rest.client.VoltronErrorDecoder.decode(VoltronErrorDecoder.java:62) ~[rest-client-1.1.0.187.jar:?]
at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:156) ~[feign-core-10.7.4.jar:?]
at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:80) ~[feign-core-10.7.4.jar:?]
at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:100) ~[feign-core-10.7.4.jar:?]
at com.sun.proxy.$Proxy113.registerMachine(Unknown Source) ~[?:?]
at com.appdynamics.agent.sim.registration.RegistrationTask.run(RegistrationTask.java:147) [machineagent.jar:Machine Agent v22.5.0-3361 GA compatible with 4.4.1.0 Build Date 2022-05-26 01:20:55]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [?:?]
at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]

Need a solution for this.

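A 401 Unauthorized at SIM registration usually points at controller credentials or licensing rather than the SIM flag itself; machine registration is rejected when the access key or account name is wrong, or when Server Visibility isn't licensed on the controller. A hedged sketch of an environment file with the commonly required variables (all values are placeholders):

APPDYNAMICS_CONTROLLER_HOST_NAME=controller.example.com
APPDYNAMICS_CONTROLLER_PORT=443
APPDYNAMICS_CONTROLLER_SSL_ENABLED=true
APPDYNAMICS_AGENT_ACCOUNT_NAME=customer1
APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=your-access-key-here
APPDYNAMICS_SIM_ENABLED=true
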
I have doubts that this saved search may not be properly engineered and may be very taxing in terms of how its time range is specified. The saved search is responsible for populating a lookup; it ends with | outputlookup <lookup name>. The range of the scheduled saved search is defined as:

earliest = -7d@h
latest = now

In the saved search there is logic, added before the last line, that filters events to the last 90 days. The search ends like this:

...
| stats min(firstTime) as firstTime, max(lastTime) as lastTime by dest, process, process_path, SHA256_Hash, sourcetype
| where lastTime > relative_time(now(), "-90d")
| outputlookup LookUpName

My question is: how would the search behave? Would its scan range cover the last 90 days, or limit itself to 7 days? Which time range takes precedence?

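For what it's worth: the dispatch window (earliest = -7d@h) bounds what the search scans, so only 7 days of events are read; the where clause merely filters the aggregated rows afterward and cannot reach further back. A lookup-generating search that genuinely retains 90 days of history usually merges the previous lookup contents before filtering; a hedged sketch of that common pattern (not necessarily what this particular search intends):

<base search over the scheduled 7-day window>
| stats min(firstTime) as firstTime, max(lastTime) as lastTime by dest, process, process_path, SHA256_Hash, sourcetype
| inputlookup append=t LookUpName
| stats min(firstTime) as firstTime, max(lastTime) as lastTime by dest, process, process_path, SHA256_Hash, sourcetype
| where lastTime > relative_time(now(), "-90d")
| outputlookup LookUpName
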
Hi, notable events in ES can now be assigned dispositions. I am able to create new dispositions from the Incident Review page and enable/disable them. From the reviewsettings.conf file I can also set a default one, set it to hidden, etc. However, I am looking to see if there is a way to require a disposition to be set when anyone edits a notable event from the Incident Review tab. I want "Unassigned" as the default one, but then require one of the others to be assigned when a notable is edited, similar to the way comments can be set to required. Basically, I need them to be mandatory. Does anyone know of a way to do this?

In the dashboard table below, I need to set a colour condition on two columns, expected_difference and sla_difference. If expected_difference is negative it should show in red; if it is positive it should show in green. Likewise for sla_difference: if it is negative it should be orange, and if it is positive it should show in green.

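A sketch in Simple XML, assuming a classic dashboard table; each column gets a colour expression keyed on the cell value (the hex colours are placeholders):

<format type="color" field="expected_difference">
  <colorPalette type="expression">if(value &lt; 0, "#DC4E41", "#53A051")</colorPalette>
</format>
<format type="color" field="sla_difference">
  <colorPalette type="expression">if(value &lt; 0, "#F8BE34", "#53A051")</colorPalette>
</format>

These <format> elements go inside the <table> element of the panel.
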
Hello, I couldn't find a sufficient solution in the documentation or the community. I have to set up a timechart, where span=1w, that starts on a particular day: Monday 00:00. The query looks like this (I am sorry, I had to anonymize sensitive information):

index=XXX sourcetype=YYY
| eval Alrt_lvl = B_Lvl + Prio_diff
| timechart span=1w count(Alrt_lvl) by Alrt_lvl

Kindly please advise.

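A hedged possibility: bin has an aligntime option that snaps bucket boundaries to a time modifier, and @w1 snaps to Monday 00:00; binning first and charting afterward avoids depending on whether your version's timechart accepts the option directly.

index=XXX sourcetype=YYY
| eval Alrt_lvl = B_Lvl + Prio_diff
| bin _time span=1w aligntime=@w1
| chart count(Alrt_lvl) over _time by Alrt_lvl

If your version supports it, the more direct form is | timechart span=1w aligntime=@w1 count(Alrt_lvl) by Alrt_lvl.
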
My code:

<search>
  <query>| makeresults
| eval API="party_interaction_rest", METHOD="GET", OPERATION="Alle,LIST_PARTY_INTERACTIONS"
| append [| makeresults | eval API="ticket_mgmt_rest", METHOD="GET", OPERATION="Alle,LIST_TROUBLE_TICKETS"]
| eval OPERATION=split(OPERATION,",")
| mvexpand OPERATION
| table API METHOD OPERATION
| search API="$token_service$" METHOD=$token_method$</query>
</search>

In the above code, $token_method$ is a dropdown field whose prefix is defined as below:

<input type="dropdown" token="token_method" searchWhenChanged="true">
  <label>Select Method:</label>
  <fieldForLabel>METHOD</fieldForLabel>
  <fieldForValue>METHOD</fieldForValue>
  <search>
    <query>| makeresults | eval API="party_interaction_rest",METHOD="Alle,GET,POST"
| append [| makeresults | eval API="ticket_mgmt_rest",METHOD="Alle,GET,POST,PATCH"]
| append [| makeresults | eval API="customer_management_rest",METHOD="Alle,GET,PATCH"]
| append [| makeresults | eval API="agreement_management_rest",METHOD="Alle,GET"]
| append [| makeresults | eval API="product_order_rest",METHOD="Alle,GET,POST,PATCH,DELETE"]
| append [| makeresults | eval API="cust_comm_rest",METHOD="Alle,GET"]
| append [| makeresults | eval API="product_inv_rest",METHOD="Alle,GET,POST,PATCH"]
| eval METHOD=split(METHOD,",")
| mvexpand METHOD
| table API METHOD
| search API="$token_service$"</query>
  </search>
  <change>
    <condition value="Alle">
      <set token="token_method">*</set>
    </condition>
  </change>
  <default>Alle</default>
  <prefix>"properties.httpMethod"=</prefix>
  <initialValue>Alle</initialValue>
</input>

So in some cases I want to ignore the prefix and only use the value from the dropdown, but in other cases I need the prefix. Please guide.

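A hedged restructuring: drop the <prefix> element from the input and instead set a second, fully formed token inside <change>, so "Alle" maps to a bare wildcard while every other selection carries the prefix (the token name method_filter is hypothetical):

<change>
  <condition value="Alle">
    <set token="method_filter">*</set>
  </condition>
  <condition>
    <set token="method_filter">"properties.httpMethod"=$value$</set>
  </condition>
</change>

Searches that need the prefixed form then reference $method_filter$, while searches that only want the raw selection keep using $token_method$.
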
I have created a custom search command, using the streaming search templates provided with the Splunk SDK. It is a simple "take the results from field x, manipulate them, and add a couple of new fields". This runs fine on my single-instance development server. When I push it out to my search head cluster, I have two problems:

1) (Coincidence?) My indexers all spiked in CPU around the same time I ran my search. Can a search head custom search command impact the indexers? Looking at the docs, it seems like it would only impact the search head.

2) My search runs fine on a standalone instance, but on a distributed instance (SHC, index cluster) I get the error below. I don't see any more info on this; how can I debug something so vague in Splunk?

[idx01-g,idx01-k,idx02-g,idx02-k,idx03-g,idx03-k,idx04-g,idx04-k,idx05-g,idx05-k] Streamed search execute failed because: Error in 'punycode' command: External search command exited unexpectedly with non-zero error code 1.

Further investigation throws this (not on standalone, SHC only), but I changed it and am still getting this error.

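On the first question: the error message itself names the indexer peers, so the command is being shipped out in the knowledge bundle and executed there, which by itself can explain the indexer CPU spike. For the vague exit-code-1, one debugging avenue is to log the traceback from inside the command so it lands in the job's search.log on the peer; a sketch against the Python SDK streaming template (the command name is taken loosely from the error message, and the field manipulation is hypothetical):

#!/usr/bin/env python
# punycode.py -- sketch of a splunklib streaming command with error logging
import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration()
class PunycodeCommand(StreamingCommand):
    def stream(self, records):
        for record in records:
            try:
                # hypothetical manipulation: derive a new field from an existing one
                record['host_lower'] = record.get('host', '').lower()
                yield record
            except Exception:
                # the full traceback is written to the job's search.log
                self.logger.exception('punycode failed on a record')
                raise

dispatch(PunycodeCommand, sys.argv, sys.stdin, sys.stdout, __name__)

The Job Inspector's search.log (or the remote dispatch directories on the peers) is where that output surfaces; a mismatched Python interpreter or a library missing from the app's bin directory on the peers is a frequent cause of a clean local run failing remotely.
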
Hi, I'm trying to set up Splunk for the Zscaler Nanolog Streaming Service. In the input settings, I can't choose zscalernss. There's only ta_zscaler_api_zscaler_zia_configurations-too_small, ta_zscaler_api_zscaler_zpa_configurations-too_small, zscaler:zia:api, and zscaler:zpa:api. How should I go about this? Thanks!

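For context, hedged: the zscalernss* sourcetypes in the Zscaler add-on are intended for syslog feeds sent by an NSS VM, not for the API modular inputs listed in that dropdown, so they are typically applied on a network or syslog-file input rather than selected in the add-on's input UI. A sketch of an inputs.conf stanza on the receiving instance (the port and index are placeholders):

[tcp://9514]
sourcetype = zscalernss-web
index = zscaler

A syslog server writing to disk, with a [monitor://...] stanza carrying the same sourcetype, is the more common production pattern.
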
Hi all, I got a request to monitor log files in Splunk. Below is the log file name pattern:

abc_uat_cpe_220614.log
abc_dev_cpe_220615.log
abc_train_cpe_220616.log
and so on.

I have configured inputs as shown below:

[monitor:///usr/local/bsl_export/abc_*_cpe_*.log]
index = abdxj
sourcetype = bsl_export:cpe
disabled = 0

But I am not getting any logs in Splunk. I checked all of the following:

Splunk service is running
Splunk user has read access
Firewall connections are all good
The latest log files are present, with enough size to read

Restarted the Splunk service; still the same issue. Checked _internal logs, and under log_level=WARN I see this message:

AutoLoadBalancedConnectionStrategy - Cooked connection to ip timed out

But the connection is fine, as I have already checked it. When I run the command below, it gives the output "hangup":

splunk list inputstatus

props is as below:

########### JIRA link ###########
[bsl_export:cpe]
############

Can anyone please help me with this?

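Two hedged checks: the WARN you quote concerns the forwarder's cooked-data connection to the indexers, which suggests the files may be read but the events never delivered, so confirm outputs.conf points at a listening receiver (port 9997 by convention). Separately, the tailing processor remembers files it has seen, and identical leading bytes across files can make new files look already-indexed. For example:

splunk btool inputs list 'monitor:///usr/local/bsl_export/abc_*_cpe_*.log' --debug
splunk list inputstatus

# optional inputs.conf addition, if the files share identical headers;
# crcSalt = <SOURCE> makes the seen-before check path-specific
[monitor:///usr/local/bsl_export/abc_*_cpe_*.log]
crcSalt = <SOURCE>
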
Hi all, I have a set of folders created by a job that runs in the backend, and the folder names keep changing automatically. They look something like this:

hs21dsb:hs54:bsds:hasn542sbsb
hshs21:nansh2225:haan53333
and so on.

I want to monitor just the latest, newly created folder in the list and ignore all the other folders. How can I achieve this in a monitor stanza?

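A monitor stanza can't select "newest folder only" by name, but a hedged approximation is to monitor the parent directory recursively and let ignoreOlderThan skip anything whose files haven't been modified recently (the parent path and the threshold are placeholders):

[monitor:///path/to/parent]
recursive = true
ignoreOlderThan = 1d

Note that ignoreOlderThan keys off file modification time rather than folder creation time, and a file once ignored is not revisited. If you truly need only the single latest folder, the usual workaround is a small cron job or scripted input that points a fixed, monitored symlink at the newest folder.
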