All Topics



Hi Experts, I'm trying to install the Add-on for McAfee Web Gateway via the GUI method and keep getting the error below. If anyone knows how to solve it, is there something that needs to be tweaked for this to work?

There was an error processing the upload. Invalid app contents: archive contains more than one immediate subdirectory: and Splunk_TA_mcafee-wg

Appreciate any help. https://splunkbase.splunk.com/app/3009/
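A common workaround (a sketch, assuming the extra top-level entry is stray archive debris such as a `__MACOSX` folder; all file and directory names below are placeholders) is to unpack the archive, keep only the app directory, and re-pack it so there is exactly one immediate subdirectory:

```shell
set -e
# Demo fixture standing in for the downloaded Splunkbase archive,
# deliberately broken with two top-level directories:
mkdir -p demo/Splunk_TA_mcafee-wg demo/__MACOSX
echo '[install]' > demo/Splunk_TA_mcafee-wg/app.conf
tar -czf addon.tgz -C demo Splunk_TA_mcafee-wg __MACOSX

# The fix: extract, drop everything except the app directory, re-pack.
mkdir -p repack
tar -xzf addon.tgz -C repack
rm -rf repack/__MACOSX
tar -czf Splunk_TA_mcafee-wg.tgz -C repack Splunk_TA_mcafee-wg

# List the immediate subdirectories of the repacked archive.
tar -tzf Splunk_TA_mcafee-wg.tgz | cut -d/ -f1 | sort -u
```

Uploading the repacked `.tgz` through the GUI should then pass the single-subdirectory check.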
Currently we're running our whole environment (SH, IDX, CM, DS, etc., all on Linux) under a single local splunk account. Is it possible to run different components with different service accounts? For example, could we use a different service account (via an AD network group) for the DS and leave all other components as-is, using the local user account?
I am using a HEC and configured a custom source type that sets _time based on a field in the JSON data. When using the "Add Data" sample-data preview it works great and _time gets updated; however, when actually sending data to the HEC, _time stays at index time (not the _time based on the data).

To give a concrete example, the JSON contains this line:

"timestampStr": "2022-06-03 19:38:19.736995059",

And I built this sourcetype:

[_j_son_logan_test]
DATETIME_CONFIG =
LINE_BREAKER = \}()\{
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
disabled = false
BREAK_ONLY_BEFORE_DATE =
SHOULD_LINEMERGE = false
TIME_PREFIX = \"timestampStr\": \"
TIME_FORMAT =
KV_MODE = json
INDEXED_EXTRACTIONS = json

When using Settings --> Add Data and selecting that source type, _time shows as 2022-06-03 19:38:19.736995059. However, when I send that JSON blob via curl to the HEC (which is set to a particular index and to use that sourcetype), the _time value shows the time it was indexed (i.e. right now, 2022-06-24). Looking at the data itself (index="my_index"), the sourcetype column shows _j_son_logan_test. Not sure what to check next, but open to thoughts, and thank you!
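Two things stand out (a sketch, not verified against this environment): `TIME_FORMAT` is empty, and events sent to the HEC `/services/collector/event` endpoint largely skip index-time parsing, so `TIME_PREFIX`/`TIME_FORMAT` are never consulted there; sending to `/services/collector/raw` makes the sourcetype's parsing settings apply, or alternatively the epoch timestamp can be supplied in the HEC envelope's `time` field. A props.conf sketch with the format filled in (assuming the nanosecond timestamps shown in the post):

```ini
# props.conf -- sketch; %9N covers the 9-digit nanosecond fraction
[_j_son_logan_test]
LINE_BREAKER = \}()\{
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
KV_MODE = json
TIME_PREFIX = \"timestampStr\": \"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%9N
MAX_TIMESTAMP_LOOKAHEAD = 40
```

The "Add Data" preview always runs the full parsing pipeline, which would explain why it behaves differently from the /event endpoint.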
Hello all, referring to the previous post: https://community.splunk.com/t5/Installation/Does-kvstore-upgrade-from-8-0-to-8-2-needs-to-be-done-on-the/m-p/603172#M11665

I tried to upgrade our KV store to wiredTiger on our license server/cluster master and our deployer. Here are the errors:

Am I not supposed to be upgrading them, then? I was following the advice from the previous community answer I received. Help? I am just trying to do this upgrade correctly. The reason I was doing this is that I was getting errors in our GUI about upgrading the KV store after I had upgraded the search heads' KV store.
I'm trying to search for a string from a lookup table that has wildcards and spaces.

For example, I have a field named firewall_string_field with the following value:

random text randomtext random My File Name With Spaces.doc random randomrandom

My lookup table, my_special_lookup.csv:

Field1
"*My File Name With Spaces.doc*"
"*Second File Name With Spaces.doc*"

My query looks like:

index=firewall [| inputlookup my_special_lookup.csv | fields Field1 | rename Field1 AS firewall_string_field]

I get no results. I do get results with a simple search like:

index=firewall firewall_string_field="*My File Name With Spaces.doc*"

I tried creating a lookup definition with match type WILDCARD(Field1) but am still getting no results.
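One pattern that may help (a sketch; `my_wildcard_def` is a hypothetical lookup definition name): `match_type = WILDCARD(Field1)` only takes effect when the definition is invoked with the `lookup` command, not when the CSV is expanded into search terms via an `inputlookup` subsearch. Matching the wildcard patterns against `_raw` directly:

```
index=firewall
| lookup my_wildcard_def Field1 AS _raw OUTPUT Field1 AS matched_pattern
| where isnotnull(matched_pattern)
```

This assumes a lookup definition over my_special_lookup.csv with match_type = WILDCARD(Field1) set in its advanced options (or transforms.conf).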
Hello, I have logs in two indexes.

index=flow_log — fields required: src_ip, src_port, dest_ip, dest_port, network interface

index=config — fields: src_ip, network interface, security group ID, security group name

In both indexes the src_ip and network interface information is common. I want to make a dashboard from these indexes with the fields below. How do I combine these different fields in one dashboard? network interface, src_ip, src_port, dest_ip, dest_port, security group id, security group name. Please help.
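One way (a sketch; the exact field names such as `network_interface` and `security_group_id` are assumptions based on the description) is to search both indexes at once and let `stats` stitch rows together on the shared fields:

```
(index=flow_log) OR (index=config)
| stats values(src_port)            AS src_port
        values(dest_ip)             AS dest_ip
        values(dest_port)           AS dest_port
        values(security_group_id)   AS security_group_id
        values(security_group_name) AS security_group_name
        by src_ip network_interface
```

The result is one row per src_ip/network interface pair carrying fields from both indexes, which can back a dashboard table panel directly.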
How can we find out the volume of logs queried in Splunk?
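One common approach (a sketch; it requires read access to the internal `_audit` index) is to read the audit trail, which records per-search scan and event counts:

```
index=_audit action=search info=completed
| stats sum(scan_count)  AS events_scanned
        sum(event_count) AS events_returned
        count            AS searches
        by user
| sort - events_scanned
```

"Volume queried" can mean events scanned or events returned; the audit fields above cover both readings.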
Hello, I have a dashboard with a couple of input dropdowns. Can I use the same input dropdowns in a different dashboard? The first dashboard input dropdowns should change the second dashboard input dropdowns.   Thank you
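Inputs themselves aren't shared between dashboards, but a drilldown link can carry the first dashboard's token values into the second dashboard's form tokens. A Simple XML sketch (the dashboard name `second_dashboard` and token names `tok1`/`tok2` are placeholders):

```xml
<drilldown>
  <link target="_blank">/app/search/second_dashboard?form.tok1=$form.tok1$&amp;form.tok2=$form.tok2$</link>
</drilldown>
```

On the second dashboard, inputs whose tokens are named tok1 and tok2 pick up the passed values automatically.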
Hi, I would like to replace the Splunk self-signed certificate on a heavy forwarder for Splunk Web, and found a document called "Configure Splunk Web to use TLS certificates". We want to use a valid signed certificate so users don't get the untrusted-website warning in their browsers. Will changing just the Splunk Web SSL certificate have any effect on the secure communications between Splunk Enterprise components? If someone can point me in the right direction, that would be great! Thanks, Tim
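For reference, the Splunk Web certificate is configured separately from the inter-Splunk TLS settings (server.conf `[sslConfig]` and the inputs/outputs configuration), so replacing it should not affect forwarder-to-indexer or management-port traffic. A web.conf sketch (the certificate paths are placeholders):

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/server_cert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/server_key.pem
```

A restart of Splunk Web is needed after the change.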
Hello. I'm super new to Splunk (love the tool for assessing Juniper FW logs) but I'm being tasked at a new job with something out of my zone. I'm a Palo Alto guy, but my boss would like our Forcepoint logs to run through this app. I have it installed, but this search:

tstats average =false count FROM datamodel=mail_log

gives me an error saying it can't find that datamodel. The model is fully operational under PP for Proofpoint, with all permissions available to all; I did check that. Can anyone help me with this? I'm using Splunk Enterprise 8.2.4 and DomainTools 4.3.0. Thank you in advance!
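For what it's worth, `tstats` needs a leading pipe, and the garbled option looks like it was meant to be `summariesonly=false`. A sketch (this assumes `mail_log` is the model's internal ID, which can differ from its display name; the internal IDs are listed under Settings > Data models or via `| rest /services/datamodel/model`):

```
| tstats summariesonly=false count FROM datamodel=mail_log
```

A "can't find that datamodel" error usually means the internal ID is different or the model isn't shared with the app/user running the search.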
What is the use of (index cim sourcetype modular:alert:risk)? What happens if it stops generating logs?
https://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/MigrateKVstore#Upgrade_KV_store_server_to_version_4.2

We upgraded Splunk Enterprise from 8.2.5 to 9.0.0 and are looking at how to upgrade mongod from 4.0 to 4.2 on a single-instance deployment. During the Splunk Enterprise upgrade, the migration to wiredTiger failed due to lack of disk space; the upgrade still continued and made the first hop of the mongod upgrade, from version 3.6 to 4.0. It looks like after version 4.0 it tried to do the engine migration but couldn't because of the lack of available disk space, and therefore didn't do the last hop to mongod 4.2. We have since fixed the disk-space issue and were able to complete the engine migration to wiredTiger; however, we don't know how to bump the mongod version up to 4.2. The link above covers upgrading mongod in a cluster but not on a single instance, and looking at the options in splunk help kvstore I don't see anything for upgrading mongod on a single instance either. I tried splunk start-shcluster-upgrade kvstore -version 4.2 -isDryRun true, but of course it detected it wasn't a search head cluster. Lastly, I'm trying to understand the difference in the output of mongod versions between the kvstore-status command and splunk cmd mongod -version; they're clearly pulling from two different places.

[App Key Value Store migration] Starting migrate-kvstore.
Created version file path=/opt/splunk/var/run/splunk/kvstore_upgrade/versionFile36
Started standalone KVStore update, start_time="2022-06-22 15:21:46".
[App Key Value Store migration] Checking if migration is needed. Upgrade type 1. This can take up to 600seconds.
[App Key Value Store migration] Migration is not required.
Created version file path=/opt/splunk/var/run/splunk/kvstore_upgrade/versionFile40
Not enough space to upgrade KVStore (or backup). You will need requiredBytes=3107201024 bytes, but KV Store DB filesystem only has availableBytes=2286272512
[App Key Value Store migration] Starting migrate-kvstore.
[App Key Value Store migration] Storage Engine hasn't been migrated to wireTiger. Cannot upgrade to service(42)

[splunk ~/var/run/splunk/kvstore_upgrade]$ splunk show kvstore-status --verbose |grep serverVersion
serverVersion : 4.0.24
[splunk ~/var/run/splunk/kvstore_upgrade]$ splunk cmd mongod -version
db version v4.2.17-linux-splunk-v3
git version: be089838c55d33b6f6039c4219896ee4a3cd704f
OpenSSL version: OpenSSL 1.0.2zd-fips 15 Mar 2022
allocator: tcmalloc
modules: none
build environment:
    distmod: rhel62
    distarch: x86_64
target_arch: x86_64
[splunk ~/var/run/splunk/kvstore_upgrade]$
Hi, good morning. The web UI on an indexer is not starting even though the following settings are in place in ..../system/default/web.conf:

[settings]
# enable/disable the appserver
startwebserver = 1
# First party apps:
splunk_dashboard_app_name = splunk-dashboard-studio
# enable/disable splunk dashboard app feature
enable_splunk_dashboard_app_feature = true
# port number tag is missing or 0 the server will NOT start an http listener
# this is the port used for both SSL and non-SSL (we only have 1 port now).
httpport = 8000

Then I added a ..../system/local/web.conf with the following to see if it would enable it, but the web UI is still disabled:

enableSplunkWebSSL = true

Any help is greatly appreciated.
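Two notes (a sketch, not a diagnosis of this specific instance): files under `system/default` should never be edited, and `enableSplunkWebSSL` controls TLS for the UI rather than whether the web server runs. The usual move is to set the toggle explicitly in `system/local` and then confirm the effective value with `splunk btool web list settings --debug`, in case another configuration layer sets `startwebserver = 0`:

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
startwebserver = 1
httpport = 8000
```

btool's --debug output shows which file each winning setting comes from, which pinpoints the layer that is disabling the UI.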
Hi all, day-1 Splunker here. I'd like to take an ingested start and stop time from index BLUE and use it to range-filter events from index RED. Using the Splunk event _time on the RED side is OK. Just a nudge in the right direction is what I'm after. Thanks, all!
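One nudge (a sketch; it assumes BLUE holds epoch-seconds fields named `start_time` and `stop_time`, which are placeholder names): a subsearch can hand `earliest`/`latest` time bounds to the outer search:

```
index=RED
    [ search index=BLUE
      | head 1
      | eval earliest = start_time, latest = stop_time
      | return earliest latest ]
```

The subsearch expands to earliest=... latest=..., which becomes the time range for the RED search; with multiple BLUE windows, dropping head 1 produces an OR of ranges instead.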
Hi team, is there any way to use REST syntax to retrieve the following: a list of all unique searches performed against a given index, with a count of how many times each was run?
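The REST search endpoints don't index searches by target index, but the audit trail can be filtered for it (a sketch; `your_index` is a placeholder, and the regex only catches searches that name the index literally):

```
index=_audit action=search info=completed search=*
| regex search="index\s*=\s*\"?your_index"
| stats count by search
| sort - count
```

Searches that reach the index indirectly (via macros, eventtypes, or data models) won't match a literal-text filter like this.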
We recently upgraded the Add-on for Cisco ASA from version 3.4.0 to 5.0.0. In version 3.4.0, KV_MODE was set to auto, which meant that a lot of information from DAP messages (734*) was extracted into named fields. E.g., for this log:

Jun 24 13:52:39 fwhost %ASA-7-734003: DAP: User username, Addr A.B.C.D: Session Attribute endpoint.anyconnect.publicmacaddress = "aa-bb-cc-dd-ee-ff"

a field named endpoint_anyconnect_publicmacaddress was created with value aa-bb-cc-dd-ee-ff. In version 5.0.0, KV_MODE is none, and an extraction is in place that creates two different fields instead: endpoint_attribute_name with value endpoint.anyconnect.publicmacaddress, and endpoint_value with value aa-bb-cc-dd-ee-ff.

When looking at a single log this is no problem, but we typically group several logs together via the transaction command (by user, src, dvc) so that all messages from the same connection are grouped. Now we get two multivalue fields with no apparent (this might be my ignorance speaking) way to match the attribute name with the value. I've tried putting mvlist=true on the transaction command and it seems to help, but all other fields get repeated N times (once for each message added to the transaction). Is there a simpler way to match the attribute name with the corresponding value after executing transaction with mvlist=false?
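One sketch that sidesteps the pairing problem: concatenate name and value into a single field before `transaction`, so each multivalue entry already carries its own key (the field names are taken from the post; the combined field name `endpoint_attr` and the index name are placeholders):

```
index=your_index sourcetype=cisco:asa "%ASA-7-734003"
| eval endpoint_attr = endpoint_attribute_name . "=" . endpoint_value
| transaction user src dvc
| table user src dvc endpoint_attr
```

After the transaction, endpoint_attr is one multivalue field of "name=value" strings, so name and value can never drift apart even with mvlist=false deduplication.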
Hi team, we have a couple of dashboards that were created by ex-employees, and the existing team is unable to access them. We don't have admin privileges either. Is there a REST query to fetch the dashboard names along with their source (code), so that we can save them under new names and use them for reference?

Thank you, SriCharan
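A sketch using the REST endpoint for views (note: dashboards still private to the ex-employees are only visible to users with sufficient rights, so without admin access this may return nothing for them; `eai:data` holds the dashboard XML source):

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| table title eai:acl.app eai:acl.owner eai:acl.sharing eai:data
```

Filtering with | search eai:acl.owner=<username> narrows the list to one author's dashboards; the XML in eai:data can then be pasted into a new dashboard.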
We have a multi-member search head cluster, but we would like a particular add-on/app to be disabled on one search head while remaining enabled on all the other search heads. That particular app needs an integration with an external service, which at the moment doesn't seem feasible due to some network limitations. I'm looking at something like the below in local/app.conf:

[install]
state = disabled

Is it OK to do that? Or is there another good way of achieving the same?
Hi team, I am trying to run the AppDynamics Machine Agent as a container to monitor our existing app containers. I have gone through the issue discussion and added this line to my environment file:

APPDYNAMICS_SIM_ENABLED=true

But I still receive this error log:

c8ebf9f96874==> [system-thread-0] 24 Jun 2022 13:04:05,719 DEBUG RegistrationTask - Encountered error during registration.
com.appdynamics.voltron.rest.client.NonRestException: Method: SimMachinesAgentService#registerMachine(SimMachineMinimalDto) - Result: 401 Unauthorized - content:
at com.appdynamics.voltron.rest.client.VoltronErrorDecoder.decode(VoltronErrorDecoder.java:62) ~[rest-client-1.1.0.187.jar:?]
at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:156) ~[feign-core-10.7.4.jar:?]
at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:80) ~[feign-core-10.7.4.jar:?]
at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:100) ~[feign-core-10.7.4.jar:?]
at com.sun.proxy.$Proxy113.registerMachine(Unknown Source) ~[?:?]
at com.appdynamics.agent.sim.registration.RegistrationTask.run(RegistrationTask.java:147) [machineagent.jar:Machine Agent v22.5.0-3361 GA compatible with 4.4.1.0 Build Date 2022-05-26 01:20:55]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [?:?]
at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]"

I need a solution for this.
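A 401 on `registerMachine` indicates the controller rejected the agent's credentials rather than a SIM-enablement problem, so the account name and access key in the environment file may be worth double-checking (a sketch; the variable names follow common Machine Agent container conventions and the values are placeholders):

```ini
APPDYNAMICS_CONTROLLER_HOST_NAME=<controller-host>
APPDYNAMICS_CONTROLLER_PORT=443
APPDYNAMICS_CONTROLLER_SSL_ENABLED=true
APPDYNAMICS_AGENT_ACCOUNT_NAME=<account-name>
APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=<access-key>
APPDYNAMICS_SIM_ENABLED=true
```

The account name and access key are shown in the controller UI under the license/account settings; a stale or copied-from-another-controller key is a frequent cause of this exact 401.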
I suspect this saved search may not be properly engineered, and may be very taxing, in terms of how its time range is specified. The saved search is responsible for populating a lookup; it ends with | outputlookup <lookup name>.

The range of the scheduled saved search is defined as:

earliest = -7d@h
latest = now

In the saved search there is logic added before the last line that filters events based on the last 90 days. The search ends like this:

..........
..........
...........
| stats min(firstTime) as firstTime , max(lastTime) as lastTime by dest , process , process_path , SHA256_Hash , sourcetype
| where lastTime > relative_time(now(), "-90d")
| outputlookup LookUpName

My question is: how would the search behave? Would its scan range cover the last 90 days, or will it limit itself to 7 days? Which time range takes precedence?
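For context (standard scheduled-search semantics, not specific to this deployment): the dispatch window (earliest/latest) bounds what is scanned, and a `where` clause only filters rows already produced, so it cannot widen the scan beyond 7 days; the 90-day condition is then effectively a no-op unless older rows are merged in from somewhere. If the intent is a rolling 90-day lookup fed by 7-day scans, a self-merge sketch (the leading lines stand for the post's elided base search):

```
  <7-day base search producing firstTime/lastTime candidates>
| inputlookup append=true LookUpName
| stats min(firstTime) as firstTime , max(lastTime) as lastTime by dest , process , process_path , SHA256_Hash , sourcetype
| where lastTime > relative_time(now(), "-90d")
| outputlookup LookUpName
```

Appending the previous lookup contents before the stats lets old entries survive each run until their lastTime ages past 90 days.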