All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Here's the situation: we have a non-developer, new to Splunk, without access to Hadoop (or any basic understanding of it), trying to back up indexed data to AWS S3. The documentation provides a lot of detail on how indexed data is stored, but it doesn't give any definitive details on how to back the data up. There are a number of references to Hadoop via Data Roll or Hunk, but we're not using Hadoop at all. What would be the simplest way to 1) do daily incremental backups of the warm buckets to S3 and 2) archive frozen buckets to Glacier so no data is lost?
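One approach, sketched under assumptions: Splunk has no built-in S3 archiving outside the Hadoop Data Roll feature, but indexes.conf lets you point coldToFrozenScript at your own script, which Splunk invokes with the frozen bucket's path as its first argument. The sketch below (the staging directory and the follow-on `aws s3 sync` / Glacier lifecycle rule are my assumptions, not Splunk features) copies each frozen bucket into a local staging area instead of letting Splunk delete it:

```python
#!/usr/bin/env python
# Hypothetical coldToFrozenScript sketch. Splunk calls this with the
# frozen bucket's directory as argv[1]; instead of deleting the bucket,
# we copy it into a staging area that a separate job ships to S3.
import os
import shutil
import sys

ARCHIVE_DIR = "/opt/splunk_frozen_staging"  # assumption: local staging area


def archive_bucket(bucket_path, archive_dir=ARCHIVE_DIR):
    """Copy a bucket directory into the staging area, keeping its name."""
    os.makedirs(archive_dir, exist_ok=True)
    dest = os.path.join(archive_dir, os.path.basename(bucket_path.rstrip("/")))
    shutil.copytree(bucket_path, dest)  # fails loudly if dest already exists
    return dest


if __name__ == "__main__" and len(sys.argv) > 1:
    archive_bucket(sys.argv[1])
```

A cron job could then run something like `aws s3 sync /opt/splunk_frozen_staging s3://your-bucket/frozen/` with an S3 lifecycle rule transitioning the objects to Glacier. Warm buckets could be synced the same way directly from the index's db directory, since warm buckets are no longer written to once they roll from hot.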
We had a severe issue last week: what can be done when the parsing and aggregation queues fill up? It took us days to figure out, the entire indexer cluster was compromised, and it took 11 hours on the line with Support to detect it. I'm now wondering whether a heavy forwarder layer is a good idea in general.
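As a side note on detection: queue saturation shows up in metrics.log well before a cluster degrades. A sketch of a search that could be saved as an alert (the queue names and the current_size_kb / max_size_kb fields are the ones Splunk emits in group=queue events):

```
index=_internal source=*metrics.log* group=queue (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue)
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart max(fill_pct) BY name
```

Alerting when fill_pct stays near 100 for several consecutive intervals would have flagged this long before the cluster was compromised.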
Hello, I'm working on a PowerShell input and am stuck on extracting the timestamp. An event is written to stdout by my script as follows:

2020-02-05T14:11:36.000000-05:00 actinguser_userid="WJ" affecteduser_userid="DG" affecteduser_name="G,D" actiondescription="Password reset by administrator. "

I am using the following props.conf:

[this:adminevents]
SHOULD_LINEMERGE = false
CHECK_FOR_HEADER = false
#KV_MODE = auto
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
#TIME_PREFIX = Timestamp\s*:\s
TZ = -05:00

Is it possible to extract the timezone directly by parsing the timestamp? This is my first run-through of an extraction, so I apologize if it's simple. Also, how do I debug extraction? Is there a way to enable debugging so that a specific sourcetype's extraction steps are logged to _internal? Thanks, Matt
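A possible fix, as a sketch: since the offset (-05:00) is part of the timestamp itself, the %:z conversion in TIME_FORMAT should pick it up directly, which makes the hard-coded TZ unnecessary:

```
[this:adminevents]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 35
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
```

As for debugging: timestamp-extraction warnings are logged to _internal by the DateParserVerbose component, so a search like index=_internal sourcetype=splunkd component=DateParserVerbose is often enough to see why a given sourcetype's timestamp failed.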
Hello all, in Splunk Enterprise 8.0.1 I ran the search "index=_internal | table _raw" and chose the Table visualization. I'd like the table to show more than 100 rows, but I can't find out how. Is there a way to remove this limitation and visualize more than 100 rows?
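One thing worth trying (the 100 here is the UI's rows-per-page picker, not necessarily a hard limit): edit the dashboard's Simple XML and set the table's count option by hand, e.g.:

```
<table>
  <search>
    <query>index=_internal | table _raw</query>
  </search>
  <option name="count">500</option>
</table>
```

count controls rows per page; separate result-size limits in limits.conf may still apply, so treat this as a sketch rather than a guaranteed fix.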
I have data that looks like Jan-19 and I want to sort by it. Except I can't, because strptime("Jan-19","%b-%y") does not work, even though e.g. strptime("Jan-1-19","%b-%d-%y") does. How do I work around this (presumably extremely common but not documented or fixed) bug?
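A common workaround is to splice a literal day into the string so the format contains a day component (date_field here is a placeholder for your actual field name):

```
... | eval sort_ts=strptime("01-".date_field, "%d-%b-%y")
| sort sort_ts
```

The sortable epoch value lands in sort_ts while the original Jan-19 display value stays untouched.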
I am trying to create a histogram plot, but I want to make the x-axis labels more readable. How do I go about doing this? Here is what I am doing:

my search | bin field span=0.5 | chart count by field

Here is an example of the x-axis when I create the chart. Is there a way to force the x-axis to show a single value for each bin (at the bin center)? Or even better, can I force the x-axis to place integer labels at their respective positions relative to my bins?
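One sketch of a workaround: bin emits range labels like "0-0.5", so the field can be rewritten to the numeric bin center before charting (the split on "-" assumes non-negative values):

```
my search
| bin field span=0.5
| eval field=tonumber(mvindex(split(field,"-"),0)) + 0.25
| chart count BY field
```

With numeric bin centers the chart axis gets one label per bin instead of a range string.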
There's something I'm just not getting today... I've got a chart command that generates results from a series of searches, evals, and other processes. The net result is a nice little chart that looks like this:

Location  2019  2020  Delta
Main      980   1268  29.39 %

The 2019 and 2020 columns are indeed years. My issue is that Delta is calculated from those two columns as:

eval Delta=(('2020'-'2019')/'2019'*100)

This is fine for this year, but of course it means we'd have to edit this dashboard again next year. How do I reference the relative column positions rather than the column names, or otherwise glean the column names from the dynamic data, in order to crunch the Delta value automagically?
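One way to sidestep the hard-coded column names, sketched on the assumption that the year columns come from a field named year before the chart step: untable the result, tag each row as the current or previous year relative to now(), and re-chart:

```
... | untable Location year count
| eval col=case(year==strftime(now(),"%Y"), "curr",
                year==strftime(relative_time(now(),"-1y"),"%Y"), "prev")
| where isnotnull(col)
| chart first(count) OVER Location BY col
| eval Delta=round((curr - prev) / prev * 100, 2)." %"
```

Because "curr" and "prev" are computed from the clock, the dashboard no longer needs editing when the year rolls over.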
Good afternoon. I have the following question: our cluster currently has roles with the restriction srchMaxTime = 3600, but we have verified that certain users are running searches for more than 1 hour. Could this be due to the capability "admin_all_objects"? Any help is appreciated. Cheers
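A user's effective limit comes from the combination of all the roles they hold and inherit, so a second, laxer role could also explain the behavior. A quick way to inspect what each role actually carries is the roles REST endpoint (field names as exposed by that endpoint):

```
| rest /services/authorization/roles
| table title srchMaxTime imported_roles capabilities
```

Cross-referencing the long-running users' role memberships against this table should show where the 3600-second limit is being bypassed.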
I am trying to get the file below indexed as a single event, but it gets broken into two or more events in Splunk. Sample file:

PING 20.152.32.XXX (20.152.32.XXX) 56(84) bytes of data.
64 bytes from 20.152.32.XXX: icmp_seq=1 ttl=248 time=67.9 ms
64 bytes from 20.152.32.XXX: icmp_seq=2 ttl=248 time=68.2 ms
64 bytes from 20.152.32.XXX: icmp_seq=3 ttl=248 time=68.1 ms
64 bytes from 20.152.32.XXX: icmp_seq=4 ttl=248 time=68.2 ms

--- 20.152.32.XXX ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 67.926/68.153/68.276/0.134 ms

What needs to change in props.conf?

[lala_pop]
BREAK_ONLY_BEFORE = PING\s+\d+.\d+.\d+.\d+
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true

Appreciate your help. Thanks
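A hedged sketch of a cleaned-up stanza: the unescaped dots in the regex match any character, and anchoring the pattern ensures only the literal PING line can start a new event:

```
[lala_pop]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^PING\s+\d+\.\d+\.\d+\.\d+
NO_BINARY_CHECK = true
```

One other thing to check: if the file is monitored while ping is still writing to it, Splunk can ship partial events before the output is complete. Writing to a temp file and moving it into the monitored directory once the ping finishes sidesteps that.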
I've looked through a lot of the posts about date/timestamp extraction and I think I'm decent enough at it, but for the life of me I can't figure out what is going on with my logs for Crashplan. I found a post with a working example of Crashplan service log props, and mine matches almost exactly, but still no go.

props.conf

[crashplan_service]
TIME_PREFIX = ^\[
MAX_TIMESTAMP_LOOKAHEAD = 21
TIME_FORMAT = %m.%d.%y %H:%M:%S.%3N
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true

inputs.conf

[monitor:///opt/crashplan/log/service.log.0]
source = crashplan
sourcetype = crashplan_service
index = crashplan
disabled = false

Here is one event where the day/month look to be swapped: With this event I have no idea how it's getting the day/month:
Does anyone know if there is a way to integrate Microsoft Azure Sentinel with Splunk? I'm specifically looking for events of interest/alerts/indicators from Sentinel into Splunk. It appears that the Microsoft Azure Add-on for Splunk provides access to many aspects of Azure including Security Center but I don't see anything specifically for Sentinel. Presumably Sentinel would take these various feeds and apply the Microsoft secret sauce to them to provide insight. Rather than having to reverse-engineer or build new in Splunk it would be good if there was a way to integrate the curated information from Sentinel into Splunk. I can't seem to find any information on a Sentinel API. There are data connectors to get data into Sentinel but I can't seem to find anything on getting data out. Thanks.
We have 3 search heads in a cluster. We are observing that a few scheduled reports deliver zero values; the zero-value reports are generated on search head 3, and the issue is not consistent. We have one main search which runs every 15 minutes, and 20 sub-reports which use the main search via loadjob; these reports also run every 15 minutes, i.e. 3 minutes after the main search. So now a few reports are being delivered to recipients with zero values from search head 3. Looking at the scheduler log, the run time is usually 0.3 seconds for successful reports, but for the failed reports the run time shows 300 seconds. Can anyone please help me understand how to troubleshoot this?
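A 300-second run time on exactly the failing runs suggests the sub-searches are hitting a timeout rather than genuinely finding nothing, so comparing scheduler activity per search head is a reasonable first step. A sketch against scheduler.log in _internal (run_time, status, and savedsearch_name are fields that log carries):

```
index=_internal sourcetype=scheduler
| stats count avg(run_time) AS avg_run_time values(status) AS status BY host savedsearch_name
| sort - avg_run_time
```

If the 300-second runs cluster on search head 3, it is worth checking whether the main search's loadjob artifact is actually available on that member at the moment the sub-reports fire.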
Hi, I have a list of all the RUNNING job IDs per dashboard, but if someone else is running the same dashboard I get those IDs as well. How can I narrow it down? I run this SPL from the dashboard I want to restrict it to; in this case the dashboard is kpi_monitoring_robbie:

| rest /services/search/jobs
| search dispatchState="RUNNING" AND provenance="*kpi_monitoring_robbie*"
| fields id provenance dispatchState

OUTPUT

https://127.0.0.1:8089/services/search/jobs/admin__admin__Murex__search60_1581445194.220651 UI:Dashboard:kpi_monitoring_robbie RUNNING
https://127.0.0.1:8089/services/search/jobs/admin__admin__Murex__search61_1581445194.220652 UI:Dashboard:kpi_monitoring_robbie RUNNING
https://127.0.0.1:8089/services/search/jobs/admin__admin__Murex__search63_1581445194.220654 UI:Dashboard:kpi_monitoring_robbie RUNNING

But one of the above was from a second dashboard. I can't do it per user, as a lot of users have the same user name; if I were using LDAP I could. Thanks in advance, Robert
Say I have an index A which has all the IPs logged during the day, so every event has an IP and the timestamp it was seen. What I need to find is the count of occurrences of each IP during the first 15 minutes, starting from the timestamp of the first occurrence of that IP. Example: say I find IP 1.2.3.4 at 10:00, 10:05, 10:12, 10:16, 10:20 and IP 9.8.7.6 at 11:00, 11:05, 11:10, 11:20. For IP 1.2.3.4 the first occurrence was at 10:00, so in the first 15 minutes, from 10:00 till 10:15, I get an occurrence count of 3; the occurrences at 10:16 and 10:20 are ignored. Similarly for IP 9.8.7.6 the first occurrence was at 11:00, so for the first 15 minutes, i.e. from 11:00 to 11:15, the occurrence count is 3; the 11:20 occurrence is ignored. So basically I want a search query which will give me the count of occurrences of each IP for the first 15 minutes starting from the first occurrence of each IP. The search result here would be:

1.2.3.4 3
9.8.7.6 3
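A sketch of one way to do this (ip is a placeholder for the actual field name): compute each IP's first occurrence with eventstats, keep only events inside the 15-minute (900-second) window, then count:

```
index=A
| eventstats min(_time) AS first_seen BY ip
| where _time < first_seen + 900
| stats count BY ip
```

Use <= instead of < if an event landing exactly on the 15-minute boundary should also count.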
I've been plugging away at this for a few days and I'm stuck =0( Above is a lookup csv (with dummy data inserted) that I have from Nessus. I am trying to use Splunk to create totals of vulnerability severity levels in two separate tables, one by organization and another by system. Below is what I want to do; any ideas how to do this? Lastly, I'm trying to use the newly created tables to make two time graphs of vulnerability severity level totals, one by organization/date and another by system/date. Scans are run every day, so inevitably the totals will change over time, which is what I'm trying to capture with the timecharts. Any ideas? Thanks!
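Without seeing the csv, here is a sketch with invented names (nessus.csv, organization, system, and severity are all placeholders for whatever the lookup actually contains): totals per organization, pivoted so each severity level becomes a column; swapping organization for system gives the second table:

```
| inputlookup nessus.csv
| stats count BY organization severity
| xyseries organization severity count
```

For the over-time graphs, if each scan row carries a date field, parsing it into _time (e.g. | eval _time=strptime(date,"%Y-%m-%d")) would let a timechart span=1d count BY severity variant capture the daily totals.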
Hi, I am trying to set up inputs in the TA-Tenable add-on and it fails with the error "Argument validation for scheme=tenable_securitycenter: script running failed (killed by signal 9: Killed).". I installed "Tenable add-on for Splunk" version 3.1.0 on one of our heavy forwarders. Does anyone have any suggestions as to what could be wrong here?
Hi there! I am trying to make an alert that tells me when a particular dashboard panel returns >0. Does anybody know how to reference a particular dashboard panel in the alert? And then, how do I reference the number returned by that dashboard panel?
We have several searches that we run, plus a manual backend process that loads that data to each endpoint (100+ endpoints). I want to schedule this custom search command to run daily, with an editable list of the 100+ endpoints to pass in to the search. Is this possible to do within Splunk?
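This should be doable with a scheduled search driven by a lookup: keep the endpoint list in a csv (editable via outputlookup or a lookup-editing app) and fan it out with map. Everything named below (endpoints.csv, the endpoint field, mycustomcommand) is a placeholder for your own names:

```
| inputlookup endpoints.csv
| fields endpoint
| map maxsearches=200 search="| mycustomcommand endpoint=\"$endpoint$\""
```

Scheduling this as a daily saved search means updating the endpoint list is just an edit to the csv, with no change to the search itself.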
Hi all, I am sending data from an intermediate forwarder to an indexer, and at indexing time I would like to send the raw, uncooked data on to a 3rd-party application. Recently I tried the CEF app's index-and-forward; it works, but the data becomes cooked. Is there any way to handle this at the indexer level? Thanks
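One avenue to look at, as a sketch: outputs.conf supports sendCookedData per tcpout group, so a dedicated group on the indexer can forward a raw stream to the 3rd-party destination while normal forwarding stays cooked (the server value is a placeholder):

```
[tcpout:third_party_raw]
server = thirdparty.example.com:5140
sendCookedData = false
```

Note that "raw" here means the stream is sent without Splunk's cooked wire protocol; it does not undo any parsing already done by the intermediate forwarder.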
I am trying to use the REST API Modular Input app, but I am getting this error:

ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/rest_ta/bin/rest.py" Exception performing request: Invalid header name 'X-APIKeys: accessKey'

The header I configured is: X-APIKeys: accessKey=blah;secretKey=blah

Ideas on how to fix this?
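The error suggests the whole string "X-APIKeys: accessKey" ended up being treated as the header name, which points at the delimiter: the app appears to expect its header properties as key=value pairs rather than "Name: value" syntax. A hedged guess at the fix, in the input's HTTP header properties field:

```
X-APIKeys=accessKey=blah;secretKey=blah
```

Delimiter conventions vary between versions of the app, so verify the exact key=value format against its documentation before relying on this.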