All Posts


This line in $SPLUNK_HOME/lib/python3.7/site-packages/splunk/clilib/cli_common.py was the source of an error when configuration initialization is slow:

    if err:
        logger.error('Failed to decrypt value: {}, error: {}'.format(value, err))
        return None
    return out.strip()

A wallclock-timing message gets emitted in the middle of the decrypt operation, causing it to fail. I changed the code to this, and the problem went away:

    if 'took wallclock_ms' not in err:
        logger.error('Failed to decrypt value: {}, error: {}'.format(value, err))
        return None
    return out.strip()
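The same workaround idea can be sketched more generally in plain Python: treat stderr as a real failure only when it contains something other than known-benign timing chatter. The helper name and marker list below are illustrative only; this is not Splunk's actual cli_common.py code.

```python
# Markers for stderr lines that are benign chatter, not failures.
# Illustrative stand-in for the wallclock message described above.
BENIGN_MARKERS = ('took wallclock_ms',)

def is_real_error(err):
    """Return True if stderr output indicates an actual failure."""
    if not err:
        return False
    lines = [line for line in err.splitlines() if line.strip()]
    # A real error is any non-empty line that carries no benign marker.
    return any(all(m not in line for m in BENIGN_MARKERS) for line in lines)
```

With this shape, adding another benign message later is a one-line change to the marker tuple rather than another string test in the control flow.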
Hi, I am trying to collect metrics from various sources with the OTel Collector and send them to our Splunk Enterprise instance via a HEC. Collecting and sending the metrics via OTel seems to work quite well, and I was quickly able to see metrics in my Splunk index. However, what I am completely missing are the labels of those Prometheus metrics in Splunk. Here is an example of some of the metrics I scrape:

    # HELP jmx_exporter_build_info A metric with a constant '1' value labeled with the version of the JMX exporter.
    # TYPE jmx_exporter_build_info gauge
    jmx_exporter_build_info{version="0.20.0",name="jmx_prometheus_javaagent",} 1.0
    # HELP jvm_info VM version info
    # TYPE jvm_info gauge
    jvm_info{runtime="OpenJDK Runtime Environment",vendor="AdoptOpenJDK",version="11.0.8+10",} 1.0
    # HELP jmx_config_reload_failure_total Number of times configuration have failed to be reloaded.
    # TYPE jmx_config_reload_failure_total counter
    jmx_config_reload_failure_total 0.0
    # HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
    # TYPE jvm_gc_collection_seconds summary
    jvm_gc_collection_seconds_count{gc="G1 Young Generation",} 883.0
    jvm_gc_collection_seconds_sum{gc="G1 Young Generation",} 133.293
    jvm_gc_collection_seconds_count{gc="G1 Old Generation",} 0.0
    jvm_gc_collection_seconds_sum{gc="G1 Old Generation",} 0.0
    # HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
    # TYPE jvm_memory_pool_allocated_bytes_total counter
    jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 6.76448896E8
    jvm_memory_pool_allocated_bytes_total{pool="G1 Old Gen",} 1.345992784E10
    jvm_memory_pool_allocated_bytes_total{pool="G1 Eden Space",} 9.062406160384E12
    jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 3.38238592E8
    jvm_memory_pool_allocated_bytes_total{pool="G1 Survivor Space",} 1.6919822336E10
    jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 1.41419488E8
    jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 1.141665096E9
    jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 3544448.0

I do see the values in Splunk, but especially for the last metric, jvm_memory_pool_allocated_bytes_total, the label saying which pool a value belongs to is lost in Splunk. Is this intentional, or am I missing something? The getting-started page for metrics also has no information on where those labels are stored and how I could query based on them (https://docs.splunk.com/Documentation/Splunk/latest/Metrics/GetStarted).

tia,
Jörg
Hi @Rosie2287, if you want to list the accounts used in the last 90 days that weren't used in the last 35 days, you could run something like this. I could be more detailed knowing which kind of logs you want to monitor; are they Windows? In that case I'd use index=wineventlog and EventCode=4624.

    index=wineventlog EventCode=4624 earliest=-90d latest=now
    | eval period=if(_time>now()-35*86400,"Last","Previous")
    | stats dc(period) AS period_count values(period) AS period BY Account_name
    | where period_count=1 AND period="Previous"
    | table Account_name

Ciao.
Giuseppe
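The logic of that search is a set difference over two time windows, which can be sketched in plain Python (the event shape and account names are illustrative stand-ins for the Windows 4624 logon events):

```python
import time

DAY = 86400  # seconds per day

def stale_accounts(events, now=None):
    """Accounts seen in the last 90 days but not in the last 35 days.

    `events` is an iterable of (account, timestamp) pairs -- an
    illustrative stand-in for logon events, not a Splunk API.
    """
    now = now if now is not None else time.time()
    recent, older = set(), set()
    for account, ts in events:
        if ts < now - 90 * DAY:
            continue                  # outside the 90-day search window
        if ts > now - 35 * DAY:
            recent.add(account)       # the "Last" period in the SPL above
        else:
            older.add(account)        # the "Previous" period
    return older - recent             # seen only in the previous period
```

Accounts that appear in both windows drop out, which mirrors the `dc(period)=1 AND period="Previous"` filter in the SPL.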
Hi guys, in my scenario I want to compare two column values. If they match, that's fine; if the values differ, I want to display both field values in some colour on the Splunk dashboard.

    Field1  Field2
    28      28
    100     99
    33      56
    18      18
Is there a Splunk query I can use to list when a CD drive is accessed and written to, and the users associated with those actions?
Is there a query I can add to my Splunk dashboard that will list accounts inactive for over 35 days?
Hello @richgalloway, I created a Splunk account for the user and gave them the URL en-US/account/login?loginType=splunk, but I'm getting Bad Request.
and what could be a solution?
You do not have to do anything to enable_insecure_login to allow external users to use your Splunk.  Just add a Splunk account for them and give them the loginType URL.
Hi @AvivBenSha, as its name suggests, a Heavy Forwarder forwards data to the indexers, so it isn't involved in the indexing phase. In that phase, indexers store _raw data and index events. HFs are only involved in the input, merge, typing, and parsing phases, not in the indexing phase. UFs, instead, are only involved in the input phase, not in the others. Ciao. Giuseppe
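The division of labour described above can be sketched as a simple lookup. The phase names follow the post; this is an illustration of the mental model, not an official Splunk data structure or API:

```python
# Which pipeline phases each Splunk component performs, per the
# description above. Illustrative only -- not an official Splunk API.
PHASES_BY_COMPONENT = {
    "universal_forwarder": ["input"],
    "heavy_forwarder":     ["input", "merge", "typing", "parsing"],
    "indexer":             ["input", "merge", "typing", "parsing", "indexing"],
}

def handles_indexing(component):
    """True only for the component that writes events and indexes to disk."""
    return "indexing" in PHASES_BY_COMPONENT.get(component, [])
```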
Hello Splunkers, my Splunk instance is configured with default SAML authentication. Now I want to add users from an external domain to access a list of Splunk dashboards. How can I do that? I searched in the community and found that we can use en-US/account/login?loginType=splunk after changing enable_insecure_login = False in web.conf. I'm a little worried about the consequences after I change that setting. Is there any way to provide access to external users without any security concerns? Thank you in advance!
I'm looking to export Services from Splunk ITSI; however, there is no direct export feature in the GUI (at least within the Services page). Is there any other way to export ITSI services?
I tried changing it, but it didn't work. Somehow I managed to receive the PEM file, but when applying the certificate, it isn't working. Any help with the steps to configure it would be much appreciated.
Heavy Forwarders parse data if they are the first full instance to see that data. An HF does everything an indexer would do *except* write the data to disk. HFs do not "index" any data - that's done by indexers.
The preferences setting in the UI only affects how times are displayed in the UI.  It has no effect on props.conf or the data flowing through the HF.
There should be no overlap between the TIME_PREFIX and TIME_FORMAT settings. Splunk skips past TIME_PREFIX and then starts looking for text that matches TIME_FORMAT. Since "<Data Name='date'>" doesn't appear twice in a row, there is no match for the timestamp. Try these settings:

    TIME_PREFIX = <Data Name='date'>
    MAX_TIMESTAMP_LOOKAHEAD = 100
    TIME_FORMAT = %Y-%m-%d</Data><Data Name='time'>%H:%M:%S
    TZ = UTC
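The way those two settings combine can be sketched in Python: skip past the prefix, then parse the remainder with a strptime-style format in which the XML between the date and time fields is matched as literal text. The sample event below is made up for illustration:

```python
from datetime import datetime, timezone

# Illustrative stand-ins for the props.conf settings above.
TIME_PREFIX = "<Data Name='date'>"
TIME_FORMAT = "%Y-%m-%d</Data><Data Name='time'>%H:%M:%S"

# Length of a fully rendered timestamp in this format (all fields fixed-width).
SAMPLE_LEN = len(datetime(2000, 1, 1).strftime(TIME_FORMAT))

def extract_timestamp(event):
    """Skip past TIME_PREFIX, then parse TIME_FORMAT; strptime treats the
    non-% characters (the closing/opening XML tags) as literals that must
    match, which is the same idea Splunk applies here."""
    start = event.index(TIME_PREFIX) + len(TIME_PREFIX)
    window = event[start:start + SAMPLE_LEN]
    return datetime.strptime(window, TIME_FORMAT).replace(tzinfo=timezone.utc)
```

The point of the sketch is that the format string starts where the prefix ends, so the prefix text must not be repeated at the front of TIME_FORMAT.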
From what I understand about Splunk, it works on the raw data and does not parse it; it marks and "segments" areas of the data in the tsidx file. Also, from what I understand about HF vs. UF, unlike the universal forwarder, the heavy forwarder does part of the indexing itself. So what exactly does it index? Does it segment the raw data into the tsidx file and send both to the indexer?
Yup, that would work too. Sysmon - Configuration Files  Sysmon - Event Filtering 
Thanks a lot @gcusello! I just created a search to build the CSV used in your query:

    | ldapsearch domain=default search="(objectClass=computer)"
    | table name
    | rename name as host
    | outputlookup append=false monitored_hosts.csv

and I run your query using monitored_hosts.csv. It works flawlessly! Thanks once again.
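For anyone who needs to seed the same lookup from outside Splunk, the file outputlookup produces here is just a one-column CSV with a "host" header. A minimal sketch (the function name and host list are illustrative, not part of any Splunk tooling):

```python
import csv

def write_monitored_hosts(hosts, path="monitored_hosts.csv"):
    """Write a one-column lookup of host names, matching the shape that
    the `| rename name as host | outputlookup ...` search produces."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["host"])       # header row expected by the lookup
        for host in hosts:
            writer.writerow([host])
```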
Why use blacklists at all? Sysmon has excellent means of filtering what does and does not get logged.