All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have more than 100 UFs deployed and want the date and time of each forwarder to be shown, on a real-time basis, on a dashboard. How can I read the clock of a UF in real time?
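One way to approximate each forwarder's clock, sketched in SPL: a UF stamps its own _internal events with its local clock, so comparing the latest _time per host against _indextime shows both the last-seen time and any skew (the sourcetype filter is illustrative):

```
index=_internal sourcetype=splunkd
| eval itime=_indextime
| stats latest(_time) as uf_clock, latest(itime) as indexer_clock by host
| eval skew_seconds = indexer_clock - uf_clock
| convert ctime(uf_clock) ctime(indexer_clock)
```

Run on a short real-time or 5-minute window, this gives a per-forwarder "clock" panel without polling each UF directly.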
Hi, I want to check when an index was created, modified, or deleted from the internal logs (along with other details of that particular operation). Is there any way to query this data?
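Index create/update/delete actions made through Splunk Web or REST hit the data/indexes endpoints, which appear in the splunkd UI access logs; a sketch (field names as they appear in splunkd_ui_access, endpoint pattern assumed):

```
index=_internal sourcetype=splunkd_ui_access uri="*/data/indexes*" (method=POST OR method=DELETE)
| table _time user method uri status
```

Changes made by editing indexes.conf directly on disk will not appear here, only actions that went through the UI/REST layer.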
How might I incorporate a regex into a LIKE eval element in a search like this? This syntax does not work:

| eval product=case((signature LIKE "%Cipher%") OR (signature LIKE "%SMBv2 signing%") OR (signature LIKE "%Diffie-Hellman%") OR (signature LIKE "%Weak Cryptographic%") OR (signature LIKE "%SHA-%") OR (signature LIKE "%SWEET32%") OR (signature LIKE "%TLS/SSL%") OR (signature LIKE "%Certificate Is Invalid%") OR (signature LIKE "%protocol%"), "Cipher/Protocol/Cert", signature LIKE "%Java%", "Java", signature LIKE regex="[M][S][0-9][0-9][-][0-9][0-9][0-9]", "test", signature LIKE "%Apache%", "Apache", signature LIKE "%Apple%", "Apple", signature LIKE "%Cisco%", "Cisco", | search product=test | dedup signature | table signature product
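In SPL, LIKE only understands the % and _ wildcards; for a real regex the usual tool is the match() eval function. A sketch of just the failing branch rewritten, with the other branches abbreviated:

```
| eval product=case(
    match(signature, "MS\d\d-\d\d\d"), "test",
    signature LIKE "%Java%", "Java",
    signature LIKE "%Apache%", "Apache",
    true(), "Other")
| search product=test
| dedup signature
| table signature product
```

match() takes the field and a PCRE string, so the character-class form [M][S][0-9][0-9]... collapses to MS\d\d-\d\d\d.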
Hi, I would like to display the results of a timechart without doing another table search. Is this possible, please? Thanks.
Hi team. I am attempting to use the app (great, by the way) with an Australian map tile provider, and I have verified that the Z parameter is off by 1. E.g., where the "normal" fetch looks for a zoom of 18, the map provider uses z=19. Is there a way to use the override URL but add 1 to any z value? Thanks.
`myquery` | table Site Device Interface metric_name * returns values like this:

Site   Device  Interface  metric_name  full_metric_name     values  _time
Ams-P  xyz123  vni-0/1.0  vni-0/1_0    vni-0/1_0_in_usage   0.72    2020-03-02
Ams-P  xyz123  vni-0/1.0  vni-0/1_0    vni-0/1_0_out_usage  1.61    2020-03-02
Ams-S  xyz678  vni-0/1.0  vni-0/1_0    vni-0/1_0_in_usage   0.62    2020-03-02
Ams-S  xyz678  vni-0/1.0  vni-0/1_0    vni-0/1_0_out_usage  1.20    2020-03-02

Now I want to split the in_usage and out_usage into two different columns and show the output like below:

Site   Device  Interface  in_usage  out_usage  _time
Ams-P  xyz123  vni-0/1.0  0.72      1.61       2020-03-02
Ams-S  xyz678  vni-0/1.0  0.62      1.20       2020-03-02
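One way to pivot the in/out rows into columns, sketched under the assumption that the field holding values like vni-0/1_0_in_usage is named full_metric_name: extract the direction suffix, build a composite row key, then spread it with xyseries.

```
`myquery`
| rex field=full_metric_name "_(?<direction>in_usage|out_usage)$"
| eval row=Site."|".Device."|".Interface."|"._time
| xyseries row direction values
| rex field=row "^(?<Site>[^|]+)\|(?<Device>[^|]+)\|(?<Interface>[^|]+)\|(?<_time>.+)$"
| table Site Device Interface in_usage out_usage _time
```

The composite key is needed because xyseries accepts only one row field; it is split back out afterwards.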
Just got my license this past week and I've been having a blast setting things up. Amazing program. Anyway, I'm running into an indexer cluster question. I've got a separate master node and three indexers, all in the same cluster. When I go to the master node, I see all the peers, indexes, and search heads. Under the Indexes tab, it only shows the _internal indexes and main. This doesn't seem right, because on each of the indexers I've installed the same indexes.conf file. When I do a search from the search head with index=oswin, I do see data. Why, then, do I not see that index listed on the Indexes page on the master node? Does that mean the index is NOT having its buckets replicated? I hope this makes sense, and apologies if it does not... I've only been at this for about a week. Thanks!
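When indexes.conf is placed on each peer by hand rather than pushed from the master, the index exists and is searchable but is typically not replicated, which matches these symptoms. The usual approach is to define replicated indexes in the master's configuration bundle, roughly like this (paths are illustrative):

```
# On the cluster master: $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
[oswin]
homePath   = $SPLUNK_DB/oswin/db
coldPath   = $SPLUNK_DB/oswin/colddb
thawedPath = $SPLUNK_DB/oswin/thaweddb
# repFactor = auto is what makes the cluster replicate this index's buckets
repFactor  = auto
```

Then push it to the peers with `splunk apply cluster-bundle` on the master.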
Hi Splunkers, I have a requirement to search for some filenames and display the missing files by date. Thus, I made up a query that looks like:

index=123 host=htrstef87 "string_1," "created" NOT client_ip="192.168.17.5" "String_examp_*" "xml.7z.pgp" | eval keyword=case(searchmatch("string_1L_2"),"string_1L_2",searchmatch("string_1L_21"),"string_1L_21",searchmatch("string_1L_22"),"string_1L_22",searchmatch("string_1L_23"),"string_1L_23",searchmatch("string_1L_24"),"string_1L_24") | eval Filestatus=if(like(keyword, "string_1L%"), "fileFound", "Filenotfound") |eval DateReport= date_month."-".date_year| stats values(keyword), values(FileName), values(Filesize) by DateReport | where Filesize>0

This displays all the filenames with all the data. But the requirement is to match the keywords, check them on a certain date every month, and send an alert if any files are missing or have no bytes (filesize). Any help is much appreciated. Note: I am running Splunk 6.5.3, so functions like in() in a where clause do not work for me. Thanks, Amit
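One common pattern for this kind of missing-file alert, sketched with an abbreviated keyword list: append a zero row for every expected keyword, then keep only the keywords whose real count or byte total is zero. Anything this search returns is missing or empty, so "alert when results > 0" fits a monthly schedule.

```
index=123 host=htrstef87 "created" NOT client_ip="192.168.17.5"
| eval keyword=case(searchmatch("string_1L_2"),"string_1L_2", searchmatch("string_1L_21"),"string_1L_21", searchmatch("string_1L_22"),"string_1L_22")
| stats count as found, sum(Filesize) as bytes by keyword
| append
    [| makeresults
     | eval keyword=split("string_1L_2,string_1L_21,string_1L_22", ",")
     | mvexpand keyword
     | eval found=0, bytes=0]
| stats max(found) as found, max(bytes) as bytes by keyword
| where found=0 OR bytes=0
```

Everything here (append, makeresults, mvexpand, case) is available in 6.5.x, so the missing in() function is not needed.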
Has anyone had experience ingesting Nessus scan data into Splunk with the new Tenable app/add-on? If so, please share which "application" type you chose when configuring the add-on on the configuration page. Currently it gives three options:
- Tenable.sc Credentials
- Tenable.sc Certificates
- Tenable.io
Which one do I choose for Nessus?
I set up 3 DB inputs as part of our requirement to ingest DB logs. The problem is that we're encountering duplicate events when ingesting data into Splunk. I can observe the duplicate events when there is no new data to be fetched from the DB: Splunk DB Connect still uses the last checkpoint value and keeps re-ingesting the data until it fetches new data from the DB. Here are sample internal logs showing how the checkpoint value is generated in one of the DB inputs. Any idea where the issue is and how to fix it?

$SPLUNK_HOME/var/lib/splunk/modinputs/server/splunk_app_db_connect
{"value":"2020-03-02 00:42:39.307","appVersion":"3.1.4","columnType":93,"timestamp":"2020-03-02T11:45:00.019+11:00"}
{"value":"2020-03-02 00:47:17.17","appVersion":"3.1.4","columnType":93,"timestamp":"2020-03-02T11:50:00.084+11:00"}
{"value":"2020-03-02 00:58:00.783","appVersion":"3.1.4","columnType":93,"timestamp":"2020-03-02T12:05:00.164+11:00"}
{"value":"2020-03-02 00:58:00.783","appVersion":"3.1.4","columnType":93,"timestamp":"2020-03-02T12:15:00.018+11:00"}
{"value":"2020-03-02 00:58:00.783","appVersion":"3.1.4","columnType":93,"timestamp":"2020-03-02T12:20:00.624+11:00"}
{"value":"2020-03-02 00:58:00.783","appVersion":"3.1.4","columnType":93,"timestamp":"2020-03-02T12:25:00.017+11:00"}
{"value":"2020-03-02 00:58:00.783","appVersion":"3.1.4","columnType":93,"timestamp":"2020-03-02T12:35:00.423+11:00"}
{"value":"2020-03-02 01:37:59.38","appVersion":"3.1.4","columnType":93,"timestamp":"2020-03-02T12:50:00.108+11:00"}

Thanks.
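A frequent cause of this symptom is a rising-column query that compares against the checkpoint with >= (or lacks an ORDER BY), so the rows at the checkpoint value are re-read on every run. A sketch of the shape DB Connect expects for a rising input (table and column names are placeholders for whatever the actual input uses):

```sql
-- The ? is replaced by the stored checkpoint value on each scheduled run.
-- Strictly greater-than, plus ordering on the rising column, prevents
-- the last-seen rows from being fetched again.
SELECT *
FROM event_log
WHERE last_modified > ?
ORDER BY last_modified ASC
```

It is also worth confirming the rising column is strictly increasing and unique enough (a timestamp with many identical values can still cause gaps or repeats at the boundary).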
I want to ingest a very large file that has no usable timestamps. I want to set:

SHOULD_LINEMERGE = false
DATETIME_CONFIG = CURRENT

The problem is that the thousands of rows get the same timestamp down to the millisecond. This makes searching extremely slow, because all the records are clumped together on one indexer. Is there a way to force Splunk to break up the file and assign slightly varying timestamps on ingestion?
In my events I want to extract TLDs. I want to extract: com, news, tech, net, org. Please help me with rex? Thanks in advance.
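A minimal rex sketch, assuming the hostname lives in a field named domain (e.g. www.example.com) — the index, sourcetype, and field names here are placeholders: capture the label after the last dot.

```
index=web sourcetype=dns
| rex field=domain "\.(?<tld>[A-Za-z]+)$"
| stats count by tld
```

If the raw value is a full URL rather than a bare hostname, the domain would need to be extracted first (or the regex anchored differently), since a path or port after the hostname would break the $ anchor.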
The add-on's props.conf has a REPORT statement that calls, among others, sysmon-dns-record-data and sysmon-dns-ip-data. But there are no stanzas by these names in the add-on's transforms.conf. There are, however, [extract_dns_record_data] and [extract_dns_ip_data]. I'm not sure if it's just a case of the names needing to be aligned.
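If it is just a naming mismatch, the fix would be a local override that points REPORT at the stanza names that actually exist in transforms.conf; a sketch (the props stanza name here is illustrative — use the sourcetype the add-on actually defines):

```
# props.conf in the add-on's local/ directory (sketch)
[XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
REPORT-sysmon-dns = extract_dns_record_data, extract_dns_ip_data
```

Putting the override in local/ rather than editing default/ keeps it from being lost on the next add-on upgrade.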
SmartStore doesn't appear to be respecting our disk usage limits via the [diskUsage] and [cachemanager] stanzas (minFreeSpace and max_cache_size, respectively). Is there a way to purge SmartStore data that is using this excessive space? I've attached a diag from one of our peers and from our cluster master.
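For reference, the settings in question live in server.conf on each peer; a sketch with illustrative values (sizes are in MB):

```
# server.conf (sketch)
[diskUsage]
minFreeSpace = 5000

[cachemanager]
max_cache_size = 300000
eviction_policy = lru
```

One thing worth checking before assuming the cache manager is misbehaving: max_cache_size only governs the local cache of remote buckets, so space consumed by hot buckets (which are not yet uploaded) and by non-SmartStore indexes on the same volume does not count against it.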
Background: I attempted to import a CSV file into a KV Store using v3.3.3 of the Lookup Editor app. One of the fields in the CSV uses 0 to represent false and 1 to represent true. I set the corresponding field as Boolean when I created the KV Store, but when I tried to import the data, the Boolean field showed "#bad-value". I found that in order to get the CSV to be successfully imported into the KV Store, I had to pre-process the file (using sed) to replace the 1's with true (sans quotes) and the 0's with false (again, sans quotes). Various settings within the Splunk config files use 0 to represent false and 1 to represent true. Would it be possible to have Lookup Editor do the same when importing into a field that has been typed as Boolean?
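Until the app handles this itself, the pre-processing can be made safer than a global sed replace by targeting only the boolean column; a sketch with awk, assuming the flag is the second CSV column (file names and the sample data are illustrative):

```shell
# Create a small sample CSV with a 0/1 "flag" column
printf 'name,flag\nalice,1\nbob,0\n' > input.csv

# Rewrite column 2 as true/false, leaving the header row untouched
awk -F, -v OFS=, 'NR==1 {print; next} {$2 = ($2 == 1 ? "true" : "false"); print}' \
    input.csv > output.csv

cat output.csv
# -> name,flag
#    alice,true
#    bob,false
```

Unlike a blanket s/1/true/ this cannot corrupt 0s and 1s that happen to appear in other columns, though it still assumes the CSV has no quoted fields containing commas.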
I need to hide "Settings" in the Splunk bar for a given app and a given role. Any suggestions? TIA.
I need to restrict users from seeing, in the Search app, indexes that were created in a specific app. Any suggestions on how to achieve this? TIA.
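One way to scope which indexes a role can search at all, sketched in authorize.conf (the role name and index list are illustrative):

```
# authorize.conf (sketch)
[role_restricted_user]
srchIndexesAllowed = main
srchIndexesDefault = main
```

Note this restricts what the role can search and see data from, which is usually the actual requirement; it does not by itself hide index names from every management UI, since index definitions are visible to anyone with the relevant admin capabilities.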
A user whose role does not have read access to the "Search" app cannot access "Account Settings" to change their password. Any idea how to get to "Account Settings" without Search app grants? TIA.
In the Splunk UI, on the left-hand side after a search, you can see the fields and the top 10 values (with their percentage and count) for each field. I would like to get this programmatically. Is there any way to do it using the Splunk SDK, or is there any query that would give the same result? Thanks in advance.
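The fields-sidebar statistics can be approximated in plain SPL with the fieldsummary command, which any SDK can then run like an ordinary search (the index and sourcetype here are placeholders):

```
index=_internal sourcetype=splunkd
| fieldsummary maxvals=10
| table field count distinct_count values
```

The values column holds a JSON array of the top values with their counts, from which the sidebar's percentages can be derived (value count divided by the field's count).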
Hi, Splunkers: My customer wants to monitor the following two things:
1. The status of log collection. That means they want to ensure that all logs are being indexed into Splunk.
2. The status of Splunk itself. They want the Splunk Web messages (like the message in the image) sent to their centralized monitoring platform in real time whenever a warning or error occurs, because they rarely look at the Monitoring Console.
Any ideas for these?
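For the first requirement, a common sketch is a scheduled alert on hosts that have gone quiet (the 60-minute threshold is illustrative and should match the expected reporting cadence):

```
| metadata type=hosts index=*
| eval minutes_since_last_event = round((now() - recentTime) / 60)
| where minutes_since_last_event > 60
| convert ctime(recentTime)
| table host recentTime minutes_since_last_event
```

For the second, the Splunk Web bulletin messages are exposed over REST and can be read with a search such as `| rest /services/messages`; a scheduled alert over that search, with a webhook or script action, is one way to push them to an external monitoring platform.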