All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello @richgalloway  Yes, I appended that to my instance URL and got that bad request.
Hi @Kamal.Manchanda, Thank you so much for coming back and sharing the info with the community. 
Ok, thank you. I am not sure which events report CD drive actions. I was just wondering if there was a general dashboard query that could be used to identify CD drive usage.
Hello, I have this data here:

2024-04-03 13:57:54 10.237.8.167 GET / "><script>alert('struts_sa_surl_xss.nasl-1712152675')</script> 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 2 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET / - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 0 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET / - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET / - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET /Default.aspx - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 0 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET /home.jsf autoScroll=0%2c275%29%3b%2f%2f--%3e%3c%2fscript%3e%3cscript%3ealert%28%27myfaces_tomahawk_autoscroll_xss.nasl%27 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET /admin/statistics/ConfigureStatistics - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 2 10.236.125.4

It is not line breaking properly as expected for our IIS logs. This is what I currently have for our sourcetype stanza on the indexer:

[iis]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
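One way to sanity-check that LINE_BREAKER pattern outside Splunk is to apply an equivalent regex in Python. This is only a sketch: the sample lines are shortened stand-ins for the real IIS events, and the lookahead mimics Splunk's rule that only the first capture group (the newline run) is discarded while the timestamp stays with the next event.

```python
import re

# Equivalent of: LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
# The lookahead keeps the timestamp attached to the following event.
breaker = re.compile(r"[\r\n]+(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})")

# Shortened stand-ins for the real IIS log lines.
raw = ("2024-04-03 13:57:54 10.237.8.167 GET / ... 200 0 0 2 10.236.125.4\n"
       "2024-04-03 13:57:55 10.237.8.167 GET / - 443 ... 200 0 0 0 10.236.125.4\n"
       "2024-04-03 13:57:55 10.237.8.167 GET /Default.aspx ... 404 0 0 1 10.236.125.4\n")

# Each piece after splitting should be exactly one event.
events = breaker.split(raw.strip())
print(len(events))  # 3
```

If the regex splits your sample into one piece per event here, the stanza's pattern itself is likely fine and the problem is more likely where the props are applied (e.g. the stanza living on the wrong tier, or the sourcetype not matching).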
"en-US/account/login?loginType=splunk" is the tail end of the URL.  Append it to your standard Splunk URL (https://<<my splunk>>/en-US/account/login?loginType=splunk).
If the system logs have been ingested into Splunk, you need to identify which events in those logs include the information you are looking for. You can then tell Splunk how to pull out those events so you can report on them in your dashboard. We do not have access to your data; that is something only you can determine.
Our Splunk server keeps the logs for a lot longer. Sorry, I was unclear.
New Splunk user here - No, I was looking for a query I could add to my dashboard that would check system logs for when the CD drive is accessed or burned to.
If the information has been deleted, Splunk can't report on it.
Is this information in a log somewhere that you have ingested into Splunk?
One way is to use CSS and multivalue fields, where the second value in the multivalue field is used to determine the colour. See the reply here for an example: How to color the columns based on previous column... - Splunk Community
Well, it now becomes a balancing act. Your particular event took a little over 5 minutes from the _time in the event to the time it was indexed, so you could gamble and change your alert so that every 5 minutes it looks back between 10 minutes ago and 5 minutes ago. That way you will probably get all the events for that time period, but the problem here is that they will be at least 5 minutes late and up to 10 minutes late. Another option is to look back 10 minutes, but then you run the risk of double counting your alerts, i.e. an event could fall into two searches. This may not be a problem for you - that is for you to decide. An enhancement to this is to write the events which you have alerted on to a summary index and check against the summary index to see whether an event is a new alert. If you do that, you could even afford to look back 15 minutes since you will have a deduping method in place.
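The summary-index dedup idea above can be sketched in plain Python. This is a toy model under stated assumptions: the event IDs are hypothetical, and a simple set stands in for the summary index lookup.

```python
already_alerted = set()  # stands in for the summary index of past alerts

def run_alert(events_in_window):
    """Return only events not already alerted on by a previous (overlapping) run."""
    new = [e for e in events_in_window if e not in already_alerted]
    already_alerted.update(new)  # "write back" to the summary index
    return new

# Two overlapping lookback windows both see "ev2"; it alerts only once.
first = run_alert(["ev1", "ev2"])
second = run_alert(["ev2", "ev3"])
print(first)   # ['ev1', 'ev2']
print(second)  # ['ev3']
```

The design point is that once alerts are deduped against past runs, the lookback window can safely be wider than the schedule interval, which is what makes the 15-minute lookback affordable.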
Thank you for this, but I am not sure if it will work for my setup since logs are deleted weekly. These are Windows events. Do you have a query that may identify all enabled accounts and their last login date?
Hi

Assuming a sample of data from this example:

| makeresults count=5
| eval f1=random()%2
| eval f2=random()%2
| eval f3=random()%2
| eval f4=random()%2
| eval H=round(((random() % 102)/(102)) * (104 - 100) + 100)

H    f1  f2  f3  f4
100  1   0   0   1
100  1   1   0   1
101  1   1   0   0
102  1   1   1   0

I want to build a chart which contains the distinct count of H for each of f1, f2, f3, f4 where the flag is 1:

f1  f2  f3  f4
3   3   1   1

Can someone help?
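The computation being asked for can be shown in a small Python sketch, using the four sample rows from the table above (the row values are taken from the example, not generated by makeresults):

```python
# Sample rows from the question: (H, f1, f2, f3, f4)
rows = [
    (100, 1, 0, 0, 1),
    (100, 1, 1, 0, 1),
    (101, 1, 1, 0, 0),
    (102, 1, 1, 1, 0),
]

flags = ["f1", "f2", "f3", "f4"]

# Distinct count of H over the rows where each flag column equals 1.
result = {
    name: len({h for h, *fs in rows if fs[i] == 1})
    for i, name in enumerate(flags)
}
print(result)  # {'f1': 3, 'f2': 3, 'f3': 1, 'f4': 1}
```

This matches the desired output row (3, 3, 1, 1): f1 is 1 in all four rows but H only takes three distinct values there, while f3 is 1 in a single row.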
This code in $SPLUNK_HOME/lib/python3.7/site-packages/splunk/clilib/cli_common.py was the source of an error when configuration initialization is slow:

if err:
    logger.error('Failed to decrypt value: {}, error: {}'.format(value, err))
    return None
return out.strip()

There is a wallclock message that gets in the middle of the decrypt operation, causing it to fail. Changed the code to this, and the problem went away:

if 'took wallclock_ms' not in err:
    logger.error('Failed to decrypt value: {}, error: {}'.format(value, err))
    return None
return out.strip()
Hi, I am trying to collect metrics from various sources with the OTel Collector and send them to our Splunk Enterprise instance via a HEC. Collecting and sending the metrics via OTel seems to work quite fine and I was quickly able to see metrics in my Splunk index. However, what I am completely missing are the labels of those Prometheus metrics in Splunk. Here is an example of some of the metrics I scrape:

# HELP jmx_exporter_build_info A metric with a constant '1' value labeled with the version of the JMX exporter.
# TYPE jmx_exporter_build_info gauge
jmx_exporter_build_info{version="0.20.0",name="jmx_prometheus_javaagent",} 1.0
# HELP jvm_info VM version info
# TYPE jvm_info gauge
jvm_info{runtime="OpenJDK Runtime Environment",vendor="AdoptOpenJDK",version="11.0.8+10",} 1.0
# HELP jmx_config_reload_failure_total Number of times configuration have failed to be reloaded.
# TYPE jmx_config_reload_failure_total counter
jmx_config_reload_failure_total 0.0
# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="G1 Young Generation",} 883.0
jvm_gc_collection_seconds_sum{gc="G1 Young Generation",} 133.293
jvm_gc_collection_seconds_count{gc="G1 Old Generation",} 0.0
jvm_gc_collection_seconds_sum{gc="G1 Old Generation",} 0.0
# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
# TYPE jvm_memory_pool_allocated_bytes_total counter
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 6.76448896E8
jvm_memory_pool_allocated_bytes_total{pool="G1 Old Gen",} 1.345992784E10
jvm_memory_pool_allocated_bytes_total{pool="G1 Eden Space",} 9.062406160384E12
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 3.38238592E8
jvm_memory_pool_allocated_bytes_total{pool="G1 Survivor Space",} 1.6919822336E10
jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 1.41419488E8
jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 1.141665096E9
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 3544448.0

I do see the values in Splunk, but especially for the last metric "jvm_memory_pool_allocated_bytes_total" the label identifying which pool is lost in Splunk. Is this intentional, or am I missing something? The getting started page for metrics also has no information on where those labels are stored and how I could query based on them (https://docs.splunk.com/Documentation/Splunk/latest/Metrics/GetStarted)

tia,
Jörg
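For reference, the labels sit inline in each Prometheus exposition line, so a minimal parser sketch (hypothetical, not what the OTel pipeline actually does) shows exactly what information is available to arrive as dimensions on the Splunk side:

```python
import re

def parse_prom_line(line):
    """Split one Prometheus sample line into (metric name, labels dict, value)."""
    m = re.match(r'(\w+)(?:\{(.*?),?\})?\s+(\S+)', line)
    name, label_str, value = m.group(1), m.group(2) or "", m.group(3)
    labels = dict(re.findall(r'(\w+)="(.*?)"', label_str))
    return name, labels, float(value)

labeled = parse_prom_line(
    'jvm_memory_pool_allocated_bytes_total{pool="G1 Eden Space",} 9.062406160384E12')
bare = parse_prom_line('jmx_config_reload_failure_total 0.0')
print(labeled)  # ('jvm_memory_pool_allocated_bytes_total', {'pool': 'G1 Eden Space'}, 9062406160384.0)
print(bare)     # ('jmx_config_reload_failure_total', {}, 0.0)
```

In Splunk's metrics model such labels would normally surface as dimensions (e.g. queryable with `| mstats ... BY pool`), assuming the exporter forwards them; if the dimension is missing entirely, the label was likely dropped before ingestion rather than hidden by the query.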
Hi @Rosie2287, if you want to list the accounts used in the last 90 days that weren't used in the last 35 days, you could run something like this. I could be more detailed knowing which kind of logs you want to monitor; are they Windows? In this case I use index=wineventlog and EventCode=4624.

index=wineventlog EventCode=4624 earliest=-90d latest=now
| eval period=if(_time>now()-35*86400,"Last","Previous")
| stats dc(period) AS period_count values(period) AS period BY Account_name
| where period_count=1 AND period="Previous"
| table Account_name

Ciao. Giuseppe
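The Last/Previous partitioning logic in that search can be sketched in Python to show why the dc(period) filter works. The login events here are hypothetical (account, epoch-time) pairs, not real data:

```python
import time

now = time.time()
DAY = 86400

# Hypothetical logon events: (account, epoch time of EventCode 4624)
events = [
    ("alice", now - 10 * DAY),   # active within the last 35 days
    ("bob",   now - 60 * DAY),   # only seen in the earlier part of the 90 days
    ("carol", now - 80 * DAY),
    ("carol", now - 5 * DAY),    # carol appears in both periods
]

# Tag each event as "Last" (newer than 35 days) or "Previous", per account.
periods = {}
for account, t in events:
    period = "Last" if t > now - 35 * DAY else "Previous"
    periods.setdefault(account, set()).add(period)

# Inactive = accounts whose only period is "Previous" (dc(period)=1 in SPL).
inactive = sorted(a for a, p in periods.items() if p == {"Previous"})
print(inactive)  # ['bob']
```

Accounts seen in both periods have two distinct period values, so the `period_count=1 AND period="Previous"` filter keeps only accounts that logged in during the 90-day window but not in the last 35 days.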
Hi Guys, In my scenario I want to compare two column values. If they match, that's fine; if the values differ, I want to display both field values in some colour in the Splunk dashboard.

Field1  Field2
28      28
100     99
33      56
18      18
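The comparison logic behind that colouring can be sketched in Python using the four sample rows above (the colouring itself would still be done in the dashboard, e.g. via CSS or format options; this only shows which rows should be flagged):

```python
# Sample rows from the question: (Field1, Field2)
rows = [(28, 28), (100, 99), (33, 56), (18, 18)]

# A row needs highlighting whenever the two fields differ.
mismatches = [(f1, f2) for f1, f2 in rows if f1 != f2]

for f1, f2 in rows:
    status = "match" if f1 == f2 else "MISMATCH"
    print(f"{f1}\t{f2}\t{status}")

print(mismatches)  # [(100, 99), (33, 56)]
```

In SPL the equivalent would be an eval'd flag column (e.g. `| eval diff=if(Field1=Field2,0,1)`) that the dashboard's colour formatting keys off.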
Is there a Splunk query I can use to list when the CD drive is accessed and written to, and the users associated with those actions?
Is there a query I can add to my Splunk dashboard that will list accounts inactive for over 35 days?