All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

How can I create a dashboard for my entities, like the one shown in the IT Essentials Work app? I downloaded the manuals: - Entity Integrations Manual - Overview of Splunk IT Essentials Work. But I cannot find my answer. Thank you for your help.
Hi, I'm having trouble finding answers about a universal forwarder failing to re-ingest an XML file.

02-08-2023 11:11:40.348 +0000 INFO WatchedFile [10392 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='ps_Z00000ldpowf9tXp9iZcoMZgvijew.log'.

This is an XML file. It is created as a small file. Eventually, an application will re-write this file with a temporary name before renaming it back to this same name. This can happen seconds after the file is created, or after many minutes or even hours. My problem is that this event suggests the forwarder knows the file has changed, but the new content of the file is not ingested. It is ingested as expected if I manually modify the top of the file later. At that point, I see:

02-08-2023 16:21:51.439 +0000 INFO WatchedFile [10392 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='ps_Z00000ldpowf9tXp9iZcoMZgvijew.log'.
02-08-2023 16:21:51.439 +0000 INFO WatchedFile [10392 tailreader0] - Will begin reading at offset=0 for file='ps_Z00000ldpowf9tXp9iZcoMZgvijew.log'.

and the new version of the file is finally available. This is a universal forwarder on a Linux server. The new version of the XML file is 2233 bytes long, so the length of the file does not seem to be a problem. A transform exists on the indexers to load the content as one event; this works fine. I do not believe my problem is related to initCrcLength, since the forwarder did notice the file changed. I blacklist the name of the temporary file. Switching "multiline_event_extra_waittime" between true and false does not help. Ingestion and re-ingestion work fine most of the time; maybe one in every 20 files does not get re-ingested as expected, and it is usually the ones that are re-written a few seconds after creation.

My question is the following: why is the file sometimes not re-indexed when the forwarder says it will do so? I can see that a timing/race condition may be at play, but the logs show nothing other than the INFO records. Would changing the debugging level help? What other parameter in the input could help if this is a timing problem? I have failed to find a solution online, because pretty much all conversations about this INFO message are about stopping file re-ingestion, so I have not been successful in finding my needle. Any advice is welcome. Thanks.
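If this does turn out to be a timing issue, one knob worth experimenting with is time_before_close, which controls how long the tailing processor waits after reaching EOF before closing a file. The stanza below is only a hedged sketch; the monitored path and the 30-second value are placeholder assumptions:

# Hypothetical inputs.conf fragment on the universal forwarder (path and value are assumptions)
[monitor:///var/app/output/ps_*.log]
# Keep the file open longer after EOF, in case the application
# rewrites and renames it within a few seconds of creation.
time_before_close = 30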
Within Splunk SOAR (Phantom), is there a way to have a prompt message pop up for the user running the playbook, as opposed to having to go up to notifications and then click the prompt notification to open it? The idea is that when certain playbooks are launched from workbooks, the prompt will supply the user with a form that affects the actions/outcomes of the playbook. While having the prompt pop up directly isn't necessary for this, it would be a quality-of-life improvement and help reduce confusion for new users. I could have sworn I saw this type of feature during a SOAR demo by Splunk, but can't find it documented anywhere, if it exists.
I've configured a HEC to receive events from a Telegraf emitter, which provides metrics in the form:

{"time":1676415410,"event":"metric","host":"VaultNonProd-us-east-2a","index":"vault-metrics","fields":{"_value":0.022299762544379577,"cluster":"vault_nonprod","datacenter":"us-east-2","metric_name":"vault.raft.replication.heartbeat.NonProd-us-east-2b-d992bf60.stddev","metric_type":"timing","role":"vault-server"}}

All of the fields come across from the HF to our indexers except the one we're most interested in, the _value field. Searching around, I found https://docs.splunk.com/Documentation/DSP/1.3.1/Connection/IndexEvent which, in part, states that "Entries that are not included in fields include: any key that starts with underscore (such as _time)". Is it possible to include an underscore-starting field in the forwarded event? Thanks
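For reference, this is the general shape of a metric posted to the HEC collector endpoint; a hedged sketch only, where <HEC_HOST> and <HEC_TOKEN> are placeholders and the metric name is shortened for readability:

# Hypothetical curl call; host, token, and values are placeholders
curl -k https://<HEC_HOST>:8088/services/collector \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"time": 1676415410, "event": "metric", "index": "vault-metrics", "fields": {"metric_name": "vault.raft.replication.heartbeat.stddev", "metric_type": "timing", "_value": 0.0223}}'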
I am trying to create a query to get the sum of multiple fields, grouped by another field.

index="*****" | stats sum(field_A) as A by field_C, sum(field_B) as B by field_C | table field_C, field_A, field_B

This query is giving an error.
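For what it's worth, stats takes a single by clause covering all of its aggregations, so a corrected sketch (assuming the intent is to sum both fields per value of field_C) might look like:

index="*****" | stats sum(field_A) as A, sum(field_B) as B by field_C | table field_C, A, B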
I am using Splunk to search old log files, and _time is different from the timestamp in the logs. Is this expected, or do I have to parse the log to set _time to the log time? Thanks.
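If the log timestamps are not being parsed, a hedged props.conf sketch for the sourcetype would point Splunk at the timestamp explicitly; the stanza name and format string below are assumptions about the data:

# Hypothetical props.conf; sourcetype name and TIME_FORMAT are assumptions
[my_old_logs]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25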
How do I verify the forwarder is sending data to the indexer? What search do I need to perform, other than Forwarder Management?
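Two hedged sketches that are commonly used for this check; <forwarder_host> is a placeholder for the forwarder's hostname:

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname

| tstats count where index=* host=<forwarder_host> by index

The first shows which forwarders the indexer has received connections from; the second confirms events from that host are actually landing in an index.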
Hello, I'm a new Splunk compliance manager and I need some assistance. How do I check Splunk compliance, and how do I better manage licensing? Thanks, Rodney
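For the licensing half, a hedged starting point is the license usage log on the license manager; treat this as a sketch rather than a complete answer:

index=_internal source=*license_usage.log* type=Usage
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) as daily_GB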
Hi all, Splunk newbie with what I hope is a simple question... I have a UF installed on my Windows file server, and it is set to monitor a directory; see below:

[WinEventLog://Security]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
monit

[WinEventLog://System]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest

[monitor://D:\documents\Confidential]
disabled = false

The intent is for it to report access/modifications/deletions to files in that directory, but I am not getting any file monitoring activity returned to my Splunk server when I perform a simple query for the Windows host. I do get all the system and security events, though. Any ideas on why I'm not getting the file monitoring activity? Thanks!
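One hedged aside, not a confirmed diagnosis: a [monitor://] stanza tails file contents, while access/modification/deletion activity normally arrives through the Security event log once object-access auditing is enabled on the folder. A hypothetical whitelist to surface the usual object-access events (verify these IDs for your environment):

[WinEventLog://Security]
whitelist = 4656,4660,4663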
02-10-2022 09:00:35.120 -0500 INFO TailingProcessor [5728 MainTailingThread] - Adding watch on path: C:\.
I need to create a search (or an embedded search that feeds data to another search). What I'm trying to get is a search like |tstats values(host) where index=* by index, which might feed a spreadsheet that has server and host, and then another search on top of it to match up host with index (NOT indexers).

|tstats values(host) where index=* by index
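A hedged sketch of feeding that tstats output through a lookup; the lookup file server_host.csv and its server field are assumptions about the spreadsheet:

| tstats values(host) as host where index=* by index
| mvexpand host
| lookup server_host.csv host OUTPUT server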
Hello to all. I am using the CEF Extraction TA for extracting CEF fields from a FireEye log. When I test this on a standalone system with the indexer and search head combined, the cs#Label fields extract correctly. As soon as I put this in a distributed environment with a heavy forwarder, indexer, and search head (or even just an indexer and search head), the fields will not extract. I am at my wit's end here. Help? Thanks!
Hi, recently I received a warning message like the following when installing the Enterprise Console on a Linux machine. Apparently because of that, I'm not able to log on to the Enterprise Console. FirewallD status is not running, so I assume there's no blocking. I retried the installation several times, but always with the same result. Any thoughts on this one?
The existing release of signalfx-tracing uses the "tar" package v4, which has the following vulnerability: tar package versions before 6.1.4 are vulnerable to Regular Expression Denial of Service (ReDoS). When stripping the trailing slash from files arguments, we were using f.replace(/\/+$/, ''), which can get exponentially slow when f contains many / characters. This is "unlikely but theoretically possible" because it requires that the user is passing untrusted input into the tar.extract() or tar.list() array of entries to parse/extract, which would be quite unusual. As a security-first policy in our organization, we strive to keep updating to the latest fixes for all vulnerable packages. We are currently blocked because we use signalfx-tracing@latest. We need signalfx-tracing to update to the latest version of tar and release a package with no other breaking changes. For more details, refer to this GitHub link: https://github.com/signalfx/signalfx-nodejs-tracing/issues/97
Hello everyone, I have a requirement where I have to generate a query.

Event 1: <l:event dateTime="2023-02-10 11:28:49.299"......some ******data*****<ns2:orderNumber>111040481</ns2:orderNumber>*****some****data****<ns2:customerType>B2C</ns2:customerType>

Event 2: <l:event dateTime="2023-02-15 11:28:49.299"......some ******data*****<ns2:orderNumber>111040481</ns2:orderNumber>*****some****data****

I have to fetch orderNumber from event 2 and customerType from event 1, as orderNumber is unique. Since event 1 and event 2 are on different dates, can we write a query to get a report?
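A hedged SPL sketch that stitches the two events together by order number; the index name and search window are assumptions:

index=orders ("<ns2:orderNumber>" OR "<ns2:customerType>")
| rex "<ns2:orderNumber>(?<orderNumber>\d+)</ns2:orderNumber>"
| rex "<ns2:customerType>(?<customerType>[^<]+)</ns2:customerType>"
| stats values(customerType) as customerType, min(_time) as first_seen by orderNumber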
Hi there everyone, I'm a fresh beginner with the Splunk SDK. I was exploring and tried to add a user using the library and the code below, and it ended up with the error at the bottom.

import splunklib.client as client

HOST = HOST
PORT = 8089
BEARER_TOKEN = BEARER_TOKEN

# Create a Service instance and log in
try:
    service = client.connect(
        host=HOST,
        port=PORT,
        splunkToken=BEARER_TOKEN,
        verify=False
    )
    if service:
        print("connected....")
except Exception as e:
    print(e)

try:
    service.users.create(
        username="test123",
        password="test321",
        roles="admin"
    )
except Exception as e:
    print(f"and as it goeeees {e}")

TimeoutError: [Errno 60] Operation timed out

Suggestions or guidance would be awesome. Sincerely, Haydar
We have upgraded Splunk Enterprise to 8.1 and Alert Manager to 3.1.11. After the upgrade, alerts are not getting auto-assigned; they keep sitting in "new" status and hence are not getting processed. Any leads on this problem will be appreciated. Thanks
We use Splunk Cloud and one on-premises HF. Using Splunk_TA_juniper in Splunk Cloud, we get Juniper logs as syslog. What do I need to do to get field extraction?
Hello. I am trying to test an app version before updating it in a test environment, but I receive an error after running the command:

./splunk apply shcluster-bundle -target https://vxxxxxxx:8089

Error:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Error while creating deployable apps: Error copying src="/opt/splunk/etc/shcluster/apps/Splunk_TA_snow" to staging area="/opt/splunk/var/run/splunk/deploy.618738529d3d8db4.tmp/apps/Splunk_TA_snow". 9 errors occurred. Description for first 9:
[{operation:"copying source to destination", error:"Permission denied", src:"/opt/splunk/etc/shcluster/apps/Splunk_TA_snow/metadata/local.meta", dest:"/opt/splunk/var/run/splunk/deploy.618738529d3d8db4.tmp/apps/Splunk_TA_snow/metadata/local.meta"},
{operation:"transfering contents from source to destination", error:"Permission denied", src:"/opt/splunk/etc/shcluster/apps/Splunk_TA_snow/metadata", dest:"/opt/splunk/var/run/splunk/deploy.618738529d3d8db4.tmp/apps/Splunk_TA_snow/metadata"},
{operation:"copying source to destination", error:"Permission denied", src:"/opt/splunk/etc/shcluster/apps/Splunk_TA_snow/local/passwords.conf", dest:"/opt/splunk/var/run/splunk/deploy.618738529d3d8db4.tmp/apps/Splunk_TA_snow/local/passwords.conf"},
{operation:"copying source to destination", error:"Permission denied", src:"/opt/splunk/etc/shcluster/apps/Splunk_TA_snow/local/splunk_ta_snow_settings.conf", dest:"/opt/splunk/var/run/splunk/deploy.618738529d3d8db4.tmp/apps/Splunk_TA_snow/local/splunk_ta_snow_settings.conf"},
{operation:"copying source to destination", error:"Permission denied", src:"/opt/splunk/etc/shcluster/apps/Splunk_TA_snow/local/splunk_ta_snow_account.conf", dest:"/opt/splunk/var/run/splunk/deploy.618738529d3d8db4.tmp/apps/Splunk_TA_snow/local/splunk_ta_snow_account.conf"},
{operation:"transfering contents from source to destination", error:"Permission denied", src:"/opt/splunk/etc/shcluster/apps/Splunk_TA_snow/local", dest:"/opt/splunk/var/run/splunk/deploy.618738529d3d8db4.tmp/apps/Splunk_TA_snow/local"},
{operation:"copying source to destination", error:"Permission denied", src:"/opt/splunk/etc/shcluster/apps/Splunk_TA_snow/lookups/snow_cmdb_ci_list.csv", dest:"/opt/splunk/var/run/splunk/deploy.618738529d3d8db4.tmp/apps/Splunk_TA_snow/lookups/snow_cmdb_ci_list.csv"},
{operation:"transfering contents from source to destination", error:"No such file or directory", src:"/opt/splunk/etc/shcluster/apps/Splunk_TA_snow/lookups", dest:"/opt/splunk/var/run/splunk/deploy.618738529d3d8db4.tmp/apps/Splunk_TA_snow/lookups"},
{operation:"transfering contents from source to destination", error:"No such file or directory", src:"/opt/splunk/etc/shcluster/apps/Splunk_TA_snow", dest:"/opt/splunk/var/run/splunk/deploy.618738529d3d8db4.tmp/apps/Splunk_TA_snow"}]

It looks like permissions are denied. I tried with both the root user and the splunk user; the error stays the same.
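Given that the staging copy fails with Permission denied, a hedged first check is ownership and read permissions on the bundle source; the splunk:splunk owner below is an assumption about the user Splunk runs as:

# Hypothetical commands; assumes Splunk runs as user/group splunk:splunk
chown -R splunk:splunk /opt/splunk/etc/shcluster/apps/Splunk_TA_snow
chmod -R u+rwX /opt/splunk/etc/shcluster/apps/Splunk_TA_snow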
Hello, I am looking for some guidance on licensing, please. AppD has two licensing models for Commercial SaaS: ABL and IBL.

1) Is it possible to convert a customer's licensing from ABL to IBL?

2) If this is possible, would the controller(s) need to be re-configured? And would this mean effectively starting over in terms of application mapping, health alerts, dashboards, and non-out-of-the-box instrumentation? The License Entitlements and Restrictions page does not cover this: License Entitlements and Restrictions (appdynamics.com)

3) Both ABL and IBL licensing models are orderable via Cisco Commerce. However, it appears that for Cisco Enterprise Agreements 3.0, only IBL licensing (Enterprise and Premium tiers) is covered, and not ABL (Peak, Pro, Advanced). Does this mean that you can only move a customer to an EA if they are licensed for IBL?

Appreciate your input. Thanks