All Topics


I recently started moving some of my indexes to SmartStore using AWS S3. I've noticed a lot of HTTP 204 and 404 errors from the Cache Manager when it tries to copy data from the local cache to the remote store. Strangely enough, all the data seems to be in the remote store and I'm not having any issues searching, but the number of errors is troubling. Does anyone know whether the Cache Manager normally misses write attempts and just retries them until it succeeds?
I need to find the users that are using certain sourcetypes in their saved searches (reports/dashboards). I have a list of sourcetypes in a CSV file.

SPL1 (this gives me the sourcetype list):

| inputlookup sourcetypelist.csv

SPL2 (this gives the list of saved searches and the search string used; I see 1200 rows here):

| rest /servicesNS/-/search/saved/searches | search search="*sourcetype*" | fields qualifiedSearch search title author

I need to combine the above two SPLs (inner join, append, subsearch, I am not sure which) to find only those saved searches that use the specific sourcetypes listed by SPL1:

| rest /servicesNS/-/search/saved/searches | search search="*sourcetype*" | fields qualifiedSearch search title author | where match(search,"osma")

As highlighted above, the match function (osma is one of the sourcetype values) takes a string/regex, but not a variable. I cannot do this:

| where match(search, $sourcetype_variable$)

I would appreciate it if someone could help me here.
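A common workaround is to build the regex inside a subsearch, relying on the fact that a subsearch returning a field literally named search substitutes its value as text into the outer pipeline. A sketch, assuming the CSV column is named sourcetype:

```spl
| rest /servicesNS/-/search/saved/searches
| fields qualifiedSearch search title author
| where match(search, [| inputlookup sourcetypelist.csv
    | stats values(sourcetype) AS st
    | eval search="\"(".mvjoin(st, "|").")\""
    | fields search])
```

The subsearch joins all sourcetype values into a quoted alternation like "(osma|foo|bar)", which is inserted verbatim as the second argument of match. If any sourcetype names contain regex metacharacters, they would need escaping first.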
Hi, From my understanding, the param `defaultGroup` under the `[tcpout]` stanza in `outputs.conf` can be set to a comma-separated list based on what's defined in the `[tcpout:<groupn>]` stanzas, i.e.:

[tcpout]
defaultGroup = group1, group2, group3, group4

[tcpout:group1]
server = 10.1.1.197:9997

[tcpout:group2]
server = myhost.Splunk.com:9997

[tcpout:group3]
server = myhost.Splunk.com:9997,10.1.1.197:6666

[tcpout:group4]
server = foo.Splunk.com:9997

Okay. But when we define `outputs.conf` like this, the forwarder routes all traffic to the target servers in every group listed in `defaultGroup`. How do we define a "backup" group, where traffic is rerouted to the "next" group in case the default group isn't reachable?
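For failover rather than cloning, the usual approach is a single output group that lists every server: multiple groups in `defaultGroup` clone the data to each group, whereas multiple servers within one group are load-balanced, and the forwarder automatically skips any server that is unreachable. A sketch:

```ini
[tcpout]
defaultGroup = all_indexers

[tcpout:all_indexers]
# autoLB defaults to true: the forwarder rotates among reachable servers
# and drops unreachable ones out of the rotation until they recover
server = 10.1.1.197:9997, myhost.Splunk.com:9997, foo.Splunk.com:9997
autoLB = true
```

A conditional "try group1 first, fall back to group2 only on failure" routing is not something tcpout offers directly; the in-group failover above is the supported pattern.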
Hi all, We have an environment with 3 indexers in a cluster. One of them (splunk_idx1) went down, and after it recovered, messages like this appear in searches:

[splunk_idx2] Failed to read size=736 event(s) from rawdata in bucket='db_index_13_5F5D7B9C-8A97-43A6-8DEB-0CFB1D29347B' path='/opt/splunk/var/lib/splunk/db_index/db/rb_1603347108_1603313418_13_5F5D7B9C-8A97-43A6-8DEB-0CFB1D29347B. Rawdata may be corrupt, see search.log. Results may be incomplete!

We have run the splunk fsck scan command, but no corrupt buckets appear in the results. But when we try to repair the bucket with the splunk rebuild command, the following message shows:

failed: Error reading rawdata: Error reading compressed journal while streaming: gzip data truncated, provider=/opt/splunk/var/lib/splunk/db_index/db/rb_1603347108_1603313418_13_5F5D7B9C-8A97-43A6-8DEB-0CFB1D29347B/rawdata/journal.gz Rebuilding bucket failed

What could be a possible solution for this issue? Thanks in advance. Best regards.
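One way to confirm the diagnosis is to run gzip's own integrity test against the journal, since Splunk's "gzip data truncated" error means the compressed stream ends prematurely. The sketch below simulates that on a throwaway file; the /tmp paths are illustrative only:

```shell
# Build a valid journal-like gzip file, then truncate a copy of it
printf 'some event data' | gzip -c > /tmp/journal.gz
head -c 10 /tmp/journal.gz > /tmp/journal_truncated.gz

# gzip -t exits 0 for an intact stream, non-zero for a truncated one
gzip -t /tmp/journal.gz && echo "intact: OK"
gzip -t /tmp/journal_truncated.gz 2>/dev/null || echo "truncated: CORRUPT"
```

If gzip -t fails the same way on the real journal.gz, that copy of the rawdata is unrecoverable locally. In an indexer cluster, the usual recovery path is to remove the corrupt replicated bucket and let the cluster manager re-replicate it from a peer that still holds a good copy.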
I need help coming up with an alert for DHCP broadcasts with no acknowledgement. The DHCP logs are being ingested into Splunk.
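As a starting point, one sketch is to pair DISCOVER/REQUEST messages with ACKs per client and alert when a client sees none. The index, sourcetype, and field names below are assumptions to adapt to the actual DHCP data:

```spl
index=dhcp sourcetype=dhcp
| stats count(eval(dhcp_type="DHCPDISCOVER")) AS discovers
        count(eval(dhcp_type="DHCPACK"))      AS acks
        BY client_mac
| where discovers > 0 AND acks = 0
```

Saved as an alert over, say, the last 15 minutes and triggered when the result count is greater than zero, this surfaces clients whose broadcasts went unanswered in that window.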
How to secure Splunk with multiple CAs: we are securing the Splunk platform with SSL. The data-flow topology is as follows:

ufwd(subbranch) ---> hfwd(subbranch) ---> hfwd(global) ---> indexer(global)

The subbranch and global use different CAs. We have successfully configured "hfwd(subbranch) ---> hfwd(global)" and "hfwd(global) ---> indexer(global)" with certificates issued by the global CA. But "ufwd(subbranch) ---> hfwd(subbranch)" needs to be secured with a certificate issued by the subbranch CA, so we need to configure two CAs on hfwd(subbranch). How do we configure the CA path on hfwd(subbranch)?
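sslRootCAPath points at a single PEM file, but that file may contain more than one CA certificate concatenated together, so one common sketch (filenames assumed) is to combine the subbranch CA and the global CA on hfwd(subbranch):

```ini
# server.conf on hfwd(subbranch)
# combined_ca.pem was built beforehand with:
#   cat global_ca.pem subbranch_ca.pem > combined_ca.pem
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/combined_ca.pem
```

splunkd then trusts certificates issued by either CA: the inbound UF connection validates against the subbranch CA, and the outbound connection to hfwd(global) validates against the global CA.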
Hi, I have 100+ servers where the Splunk forwarder version is old and needs to be upgraded. I don't have access to these servers. Without affecting the configuration, how do I upgrade them remotely?
I am looking for a way to list the counts by customer for the past hour (including customers with 0 activity), among all customers that have had activity since the start of the day. Example: John (15), Dave (10), and Maria (8) so far for the day. Within the past hour: Dave (3). The result I am looking for is something like this: John (0), Dave (3), Maria (0). I have looked at map, joins, and subsearches, but nothing so far works. I need to list the 0-activity customers as well, since they have been active during the day, just not in the last hour. Any ideas?
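Since every event needed is already in the day's window, a single pass with a conditional count avoids map/join entirely. A sketch, with the index and customer field names assumed:

```spl
index=activity earliest=@d
| eval in_last_hour = if(_time >= relative_time(now(), "-1h"), 1, 0)
| stats sum(in_last_hour) AS last_hour_count BY customer
```

Every customer active since midnight gets a row, and those with no events in the last hour show last_hour_count = 0.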
We use Outlook categories to categorize alerts received in several Outlook folders of a specific mailbox, as we monitor applications like Exchange, Skype, and SharePoint. The categories are defined by us. Is there a way we can connect to that mailbox's Outlook data and get those categories for the individual alerts in Splunk? Other data like sender, recipient, subject, and time can be obtained from Splunk by performing message tracking in Exchange on the particular mailbox. The only difficulty is how to get, through Splunk, the categories assigned by individual team members.
I am trying to send logs through a UF to my standalone instance, but the data is not getting forwarded. I have the UF installed on one of my test servers and added inputs.conf and outputs.conf, set up deploymentclient.conf, then restarted the Splunk service on the test server. On my standalone instance I have created the index.

outputs.conf (/opt/app/splunk/splunk/etc/system/local):

[tcpout]
defaultGroup = group1

[tcpout:group1]
server = mysplunkhost.com:9997

inputs.conf (/opt/app/splunk/splunk/etc/system/local):

[monitor:///folder/upload/cen*]
index = test_index
sourcetype = cenere
disabled = false

Should there be any configuration set up on my standalone instance? I don't see a serverclass defined on my standalone instance. Do any other configurations need to be added? Thank you
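The standalone instance must be configured to listen for forwarder traffic; receiving is off by default. Via the UI this is Settings > Forwarding and receiving > Configure receiving, or equivalently in inputs.conf on the receiving side:

```ini
# inputs.conf on the standalone (receiving) instance
[splunktcp://9997]
disabled = 0
```

A serverclass is not needed here; serverclass.conf only matters on a deployment server that distributes apps to forwarders.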
Hi, I use the search below:

| inputlookup lookup_xx where TYPE="Ind" | search DOMAIN=I OR DOMAIN=B OR DOMAIN=W | rename HOSTNAME as host | table host TYPE DOMAIN

Instead of using | search, I would like to include this in my where condition, but it doesn't work:

| inputlookup lookup_xx where TYPE="Ind" AND (DOMAIN=I OR DOMAIN=B OR DOMAIN=W) | rename HOSTNAME as host | table host TYPE DOMAIN

How can I do this, please? And for performance, is it better to use where or search? I also tried it as a subsearch:

[| inputlookup lookup_xx | search TYPE="Ind" AND (DOMAIN=I OR DOMAIN=B OR DOMAIN=W) | rename HOSTNAME as host]

And why can I do

[| inputlookup lookup_xx | where TYPE="Ind" OR (DOMAIN=I OR DOMAIN=B OR DOMAIN=W)

but not

[| inputlookup lookup_xx | where TYPE="Ind" AND (DOMAIN=I OR DOMAIN=B OR DOMAIN=W)

Thanks for your help
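The likely culprit is quoting: in the where command a bare I is read as a field name, not the string "I", so DOMAIN=I compares two fields and is never true. With every value quoted, the AND form evaluates as intended; a sketch:

```spl
| inputlookup lookup_xx where TYPE="Ind" AND (DOMAIN="I" OR DOMAIN="B" OR DOMAIN="W")
| rename HOSTNAME as host
| table host TYPE DOMAIN
```

The OR variant only appeared to work because TYPE="Ind" alone satisfied it. Performance-wise, filtering inside the inputlookup where clause is generally preferable to a following | search or | where, since rows are discarded as the lookup is read rather than materialized first.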
Hello, I use token filters in a table panel of my dashboard to filter the results of the search, and it works perfectly when the search is written directly in the table panel. But I need to use a scheduled search for this monitoring, and if I keep the filters in the scheduled search, the search doesn't work. So I put the filters after the loadjob command, like below. Is this correct or not?

<row>
  <panel>
    <title>Reboot &amp; logon</title>
    <input type="text" token="tok_filterhost" searchWhenChanged="true">
      <label>Hostname</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="text" token="tok_filtermodel" searchWhenChanged="true">
      <label>Model.</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="text" token="tok_filterbuilding" searchWhenChanged="true">
      <label>Building.</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="text" token="tok_reboot" searchWhenChanged="true">
      <label>Days without reboot</label>
      <default>=*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="text" token="tok_logon" searchWhenChanged="true">
      <label>Days without logon</label>
      <default>=*</default>
      <initialValue>*</initialValue>
    </input>
    <table>
      <title>TUTU</title>
      <search>
        <query>| loadjob savedsearch="admin:TOTO_sh:TITI" | search Site=$tok_filtersite|s$ | search Responsible=$tok_filterresponsible$ | search Department=$tok_filterdepartment$ | search "Days without logon"$tok_logon$ | search "Days without reboot"$tok_reboot$ | search Hostname=$tok_filterhost$ | search Model=$tok_filtermodel$ | search Building=$tok_filterbuilding$</query>

For more information, here is the stats command done in the "TITI" search:

| stats last(BUILDING_CODE) as Building, last(DESCRIPTION_MODEL) as Model, last(LastReboot) as "Last reboot date" last(NbDaysReboot) as "Days without reboot" last(LastLogon) as "Last logon date" last(NbDaysLogon) as "Days without logon" by host SITE RESPONSIBLE_USER DEPARTMENT
| rename host as Hostname, SITE as Site, RESPONSIBLE_USER as Responsible, DEPARTMENT as Department
| sort -"Days without reboot" -"Days without logon"

Thanks for your help, please
Hi Splunkers, I need to get the SOCKS logs from my MWG into my Splunk instances. Has anyone gotten these logs in? Thanks. Olivier
How can I combine these 3 queries, given that everything before the pipe is the same?

query1:
index=abc source="*/d/e/f.log" artifact_id=g*h*i* host!="jkl*" cloud=mno consumer_id=* response_code=* | timechart span=1s count

query2:
index=abc source="*/d/e/f.log" artifact_id=g*h*i* host!="jkl*" cloud=mno consumer_id=* response_code=* | timechart span=5m eval(count()) as "Response Code" by response_code

query3:
index=abc source="*/d/e/f.log" artifact_id=g*h*i* host!="jkl*" cloud=mno consumer_id=* response_code=* | timechart span=5m avg(response_time) as "Avg Response Time" p99(response_time) as "99 Percentile" p95(response_time) as "95 Percentile"
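At a common span, several aggregations can live in one timechart, so the overall count and the percentile panels merge directly. A sketch at span=5m; the 1-second count of query1 cannot share a span with the 5-minute panels, and a per-response_code split still needs its own timechart because the by clause applies to every aggregation in the command:

```spl
index=abc source="*/d/e/f.log" artifact_id=g*h*i* host!="jkl*" cloud=mno consumer_id=* response_code=*
| timechart span=5m count AS "Response Count"
    avg(response_time) AS "Avg Response Time"
    p99(response_time) AS "99 Percentile"
    p95(response_time) AS "95 Percentile"
```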
Hello, I got stuck with this. Can someone help me with the solution?
Could you please help me understand the DEBUG option for the CacheManager, in order to instigate eviction?
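One way to get CacheManager debug logging is through splunkd's logging configuration; a sketch, assuming the channel name CacheManager as it appears in splunkd.log:

```ini
# $SPLUNK_HOME/etc/log-local.cfg (create it if absent); overrides log.cfg
[splunkd]
category.CacheManager=DEBUG
```

The same level can usually be set at runtime under Settings > Server settings > Server logging by searching for CacheManager. Note that eviction itself is driven by the cache sizing settings (e.g. max_cache_size in the server.conf [cachemanager] stanza); DEBUG only makes the eviction decisions visible in splunkd.log.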
I have Stream source data which goes over the Nexus xx, then over the (Gigamon) network, then to a Stream (Linux) forwarder. However, sensitive data are not being masked. Please advise what would need to be done to mask such data.
Hey guys, I'm trying to add new threat feeds via the ES Threat Intel Download. One of the feeds requires API token authentication. I haven't been able to find a way to add an API key to the threat feed creation via the UI; there does not seem to be a way to add headers to the GET request. Is there a config file that sits on the search heads that can be adjusted via the CLI to include request headers containing the API key, or is there another solution for querying threat feeds that require authentication via the Enterprise Security web UI?
Hoping someone can help me join data in the same index across multiple events. Here is the event data:

index               event_type  job_name   item_name  queue_time
jenkins_statistics  queue       null       xxx/job/3  20
jenkins_statistics  queue       null       xxx/job/3  30
jenkins_statistics  queue       null       xxx/job    0.03
jenkins_statistics  job         xxx/job/3  null       0.03
jenkins_statistics  queue       null       xxx/job/2  22
jenkins_statistics  queue       null       xxx/job    0.01
jenkins_statistics  job         xxx/job/2  null       0.01
jenkins_statistics  queue       null       xxx/job/1  25
jenkins_statistics  queue       null       xxx/job/1  15
jenkins_statistics  queue       null       xxx/job    0.19
jenkins_statistics  job         xxx/job/1  null       0.19

The result I am looking for is:

index               job_name   count(queue_time)  avg(queue_time)
jenkins_statistics  xxx/job/3  2                  25
jenkins_statistics  xxx/job/2  1                  22
jenkins_statistics  xxx/job/1  2                  20

I want to grab each event with event_type=job, join its job_name field against the item_name field of the event_type=queue events, collect the associated queue_time values for those queue events, and calculate the average per job_name, while dropping all other event_type=queue events. I have been trying to get stats to work for this, but have not been able to figure it out. Everything I try includes the queue_time values from the unmatched event_type=queue events; I have not been able to effectively join the event_type=job and event_type=queue events.
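A join is avoidable here because stats can group both event types by one key: queue events key on item_name, job events on job_name, and only keys that actually have a job event are kept. A sketch:

```spl
index=jenkins_statistics event_type=queue OR event_type=job
| eval key = if(event_type="job", job_name, item_name)
| stats count(eval(event_type="queue"))                       AS queue_count
        avg(eval(if(event_type="queue", queue_time, null()))) AS avg_queue_time
        count(eval(event_type="job"))                         AS job_events
        BY key
| where job_events > 0
| fields - job_events
| rename key AS job_name
```

Against the sample data this yields xxx/job/3 with count 2 and average 25, xxx/job/2 with 1 and 22, and xxx/job/1 with 2 and 20, while the bare xxx/job queue rows drop out because no job event matches them.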
I have an issue when querying an Oracle 9 DB using DB Connect. All fields that are varchar are returned as empty, but numeric or date fields are returned correctly.

If I run "Select sysdate from dual" or "select count(*) from table_A" in the SQL explorer, I get the expected result, but running "select transactionid from table_a;" returns multiple rows of empty data. And if I run "select sysdate, transactionid from table_a;" I get sysdate correctly but the transactionid field is empty.

I tried adding it as an input, but when I check the events, none of the varchar fields exist in them. Using "select sysdate, transactionid from table_a;", I can see sysdate in the event but not transactionid.

If I run the query in SQL Developer, it all works perfectly fine. For the other DB inputs I am using, I don't have this issue. I have tried adding the query as an input and casting the varchar fields, but nothing seems to work. I wonder if anyone has any ideas how I could work around this?

Below is the information on the DB and Splunk:
DB: Oracle, version 9.2.0.8.0
Splunk version: 7.2.5
Current application: Splunk DB Connect App version 3.1.3