All Topics


Hi all, Is there a way to set up a multi-domain certificate and a wildcard certificate? If yes, can anyone tell me the step-by-step procedure to implement this?
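A minimal sketch of one way this is often handled, assuming the goal is a single certificate whose Subject Alternative Names cover both the explicit domains and the wildcard, and that it will be used for Splunk Web; every domain, path, and file name below is a placeholder:

# openssl.cnf fragment used when generating the CSR: the SAN list carries
# the named domains and the wildcard together
[ req_ext ]
subjectAltName = DNS:splunk.example.com, DNS:splunk.example.org, DNS:*.example.com

# $SPLUNK_HOME/etc/system/local/web.conf: point Splunk Web at the signed certificate
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/splunk_san.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/splunk_san.key

The same certificate can also be referenced in server.conf ([sslConfig]) or in inputs/outputs.conf for forwarding, and Splunk needs a restart after the change.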
Because alert queries normally look back, say, the last 15 minutes from the current time, we need our jobs to run from 12:15pm through midnight. For now our cron schedule is this: */15 12-23 * * *, which of course runs from 12:00pm to 23:45. We see an issue where the 12:00pm run may produce a false positive, and at midnight (the next day) the alert will not run, so we may miss an important alert. We want it to run from 12:15pm through 00:00 (the next day) because of the look-back to the previous 15 minutes. It may be very simple, but so far I'm at a loss. What is the correct way of doing this?
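One hedged way around this, since a single cron expression cannot use a different minute list for different hours, is to split the alert into clones in savedsearches.conf; the stanza names below are placeholders and the search and alert settings are omitted:

# savedsearches.conf (a sketch; each clone keeps the same -15m look-back window)
[my_alert_1215_to_1245]
cron_schedule = 15,30,45 12 * * *

[my_alert_1300_to_2345]
cron_schedule = */15 13-23 * * *

[my_alert_midnight]
# covers the 23:45-00:00 window the original schedule misses
cron_schedule = 0 0 * * *

Together the three schedules fire from 12:15 through 23:45 and again at 00:00, with no 12:00 run to produce the false positive.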
I have been using tstats to get event counts by day per sourcetype, but when I search for events in some of the identified sourcetypes, the search returns no results. I am a Splunk admin and have access to All Indexes. Here is the search I have run:

| tstats count where index=myindex groupby sourcetype,_time

One of the sourcetypes returned was novell_groupwise (which was quite a surprise to me), but when I search

index=myindex sourcetype=novell_groupwise

on a day that tstats indicated there were events, nothing is returned. Can anyone explain this discrepancy?
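A hedged diagnostic, since the cause depends on how the data was indexed: two common culprits are events whose _time falls outside the picked time range even though they were indexed recently, and subtle differences (case, whitespace) in the sourcetype value. Run over All Time, this sketch shows what is actually there and when it arrived:

index=myindex sourcetype=novell_groupwise
| eval event_time=strftime(_time, "%F %T"), index_time=strftime(_indextime, "%F %T")
| table event_time, index_time, host, source, sourcetype

If that still returns nothing, copying the sourcetype string straight out of the tstats output helps rule out a typo or case mismatch in the value.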
Can anyone help with why this warning message is appearing in the splunkd log?
Hi All, I hope someone can enlighten me on this seemingly simple problem. I have a very simple search returning 32 rows and showing that all events have a transaction_type value. If I click on the D highlighted above, I would expect it to show me just the 20 D rows, but instead I get: Very weird. If I change the search to

index=orafin sourcetype=ORAFIN2 NOT transaction_type!=D

then I get what I want: Can someone please explain what is happening? Thanks, Keith
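A hedged diagnostic sketch: one frequent reason transaction_type=D matches nothing while NOT transaction_type!=D does is a hidden leading or trailing space, or a case difference, in the extracted value. This shows the distinct values with their lengths (index and sourcetype copied from the post):

index=orafin sourcetype=ORAFIN2
| eval tt_len=len(transaction_type)
| stats count by transaction_type, tt_len

If tt_len comes back greater than 1 for the D rows, filtering with | where trim(transaction_type)="D" should confirm it.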
Hi team, I want to use tokens for email and xMatters notifications. I have one field named Server. This is what I write in the message for xMatters alerting:

Data isn't refreshed in time on $result.Server$

But here's what I received:

Data isnt refreshed in time on genesys-pulse-tko-04.hk.hsbc genesys-pulse-tko-04.hk.hsbc

The name of the server shows up twice in the message. Another case is where I use the token for an email notification. Here's what I write in Splunk:

The alert condition for $result.Server$ was triggered.

Here's what I receive when the alert is triggered: Does anyone know the reason for these cases?
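A hedged guess, since it depends on the alert's search: if Server is a multivalue field in the first result row, $result.Server$ renders every value, which would produce the doubled hostname. A sketch that collapses it to a single value at the end of the alert search:

<your alert search>
| eval Server=mvjoin(mvdedup(Server), ", ")

mvdedup removes repeated values and mvjoin leaves a clean single string for the token to render.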
Hi, I am trying to write a query that gets me the average TPS and the average response time for services in the same table. I tried this:

<search>
| eval <evaluate response time as RT>
| bin _time AS "TIME" span=1s
| eventstats count as TPS by TIME, service
| stats count AS txnCount, avg(TPS) as avgTPS, avg(RT) as avgRT by service

However, the numbers don't seem to match when I run the TPS query individually like this:

<search>
| bin _time AS "TIME" span=1s
| eventstats count as TPS by TIME, service
| stats count AS txnCount, avg(TPS) as avgTPS by service

Any suggestions on what I could be doing wrong here? Thank you!
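A hedged alternative: if the response-time evaluation drops, expands, or reshapes events before the eventstats, the per-second counts (and therefore avg(TPS)) are computed over a different event set than in the standalone query. Aggregating to one row per second first keeps the TPS average independent of the later steps; the sketch below assumes RT is a per-event number:

<search>
| eval RT=<response time per event>
| bin _time span=1s
| stats count AS TPS, sum(RT) AS sumRT by _time, service
| stats sum(TPS) AS txnCount, avg(TPS) AS avgTPS, sum(sumRT) AS sumRT by service
| eval avgRT=round(sumRT / txnCount, 3)
| fields service, txnCount, avgTPS, avgRT

Here avgTPS is the average over seconds that had traffic, and avgRT is still the per-event average (total RT divided by total events), so the two metrics stay consistent in one table.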
When using a HF to collect logs in the cloud: because the add-on used cannot set host, the host of the data ends up being the name of the HF, but it needs to reflect the environment the data actually comes from, and the same data type should use the same sourcetype. At present the way I do it is: first, use different sourcetypes to onboard the data (at this point they all have the same host, the HF name); then I use props and transforms to modify their host and change their sourcetype to the same one. The problem is that of the two rewrites (modify host and change sourcetype), only one takes effect. Is there a way to modify the host first and then modify the sourcetype? Or something better?
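A hedged props/transforms sketch: the usual pattern is to list both transforms in a single TRANSFORMS- setting under the original (temporary) sourcetype, with the host rewrite first, since the props stanza is matched on the metadata the event arrives with. Stanza names, the final sourcetype, and the REGEX are placeholders:

# props.conf on the HF
[my_temp_sourcetype]
TRANSFORMS-fixmeta = set_env_host, set_final_sourcetype

# transforms.conf on the HF
[set_env_host]
# placeholder: derive the host from the source path; adjust REGEX/FORMAT to fit
SOURCE_KEY = MetaData:Source
REGEX = ([^/]+)\.log$
DEST_KEY = MetaData:Host
FORMAT = host::$1

[set_final_sourcetype]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::my_final_sourcetype

Because both transforms are named on the same TRANSFORMS- line, they run in the order listed, so the host rewrite should not be lost when the sourcetype is changed.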
Hello experts, please help me arrive at a regex to extract an XML node from an XML field. I have a field value like the one below:

<Reponse status="failure">
  <messages>
    <message id="Payload">
      <UpdateAccountRq>
        <AccountId>123465</AccountId>
        <NewStatus>Active</NewStatus>
      </UpdateAccountRq>
    </message>
  </messages>
</Reponse>

I want to extract the XML node below and display it in a separate field:

<UpdateAccountRq>
  <AccountId>123465</AccountId>
  <NewStatus>Active</NewStatus>
</UpdateAccountRq>

I tried many ways, but nothing works.

Attempt 1: rex field=Action "messages>(?<Payload>.+)<\/messages" | table Action, Payload
Attempt 2: rex field=Action "\<message id=\"Payload\">(?<Payload>[^<\/message]+)" | table Action, Payload

Please help. Thanks
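A hedged sketch of a fix: the first attempt fails on multi-line values because . does not cross newlines without the (?s) flag, and the second fails because [^<\/message] is a character class (a set of forbidden characters), not the string </message>. Assuming the field really is named Action:

| rex field=Action "(?s)(?<Payload><UpdateAccountRq>.*?</UpdateAccountRq>)"
| table Action, Payload

The lazy .*? keeps the capture from running past the first closing </UpdateAccountRq> tag.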
_time               device1_avg   device2_avg   device3_avg   device4_avg
2022-04-07 00:00    34            3             11            22
2022-04-07 01:00    21            76            41            87
2022-04-07 02:00    2             18            32            32
2022-04-07 03:00    12            3             36            54
2022-04-07 04:00    7             8             21            43
2022-04-07 05:00    11            3             17            21
2022-04-07 06:00    19            12            19            16
2022-04-07 07:00    15            10            12            19
2022-04-07 08:00    4             2             19            6

I have a table of averages for an arbitrary number of arbitrary devices, as shown above. How do I use these averages as thresholds for alerts about these devices? I'm trying to have a search that runs every 15 minutes to check which devices have exceeded these averages. For example, if a search run at 06:45 returns that device1 has a count of 10, device2 has a count of 15, device3 has a count of 21, and device4 has a count of 2, it should send an alert saying that device2 and device3 have exceeded their averages listed for the 06:00 hour (i.e., 12 and 19, respectively).
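A hedged sketch in two parts, assuming the averages come from a timechart-style search (so _time is still an epoch), that they can be saved to a lookup named device_hourly_avg.csv with columns device, hour, and avg, and that the live events carry a device field; all of those names are placeholders. The first query reshapes and stores the averages, the second is the 15-minute alert search:

<search that builds the averages table>
| untable _time device avg
| eval device=replace(device, "_avg$", ""), hour=strftime(_time, "%H")
| fields device, hour, avg
| outputlookup device_hourly_avg.csv

index=my_device_index earliest=-15m@m latest=@m
| stats count by device
| eval hour=strftime(relative_time(now(), "@h"), "%H")
| lookup device_hourly_avg.csv device hour OUTPUT avg
| where count > tonumber(avg)
| table device, count, avg, hour

In the 06:45 example the second search would return device2 and device3, so alerting on "number of results greater than 0" covers the notification.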
Hi All, I am doing a very simple search over All Time of:

index=orafin sourcetype=ORAFIN2

It returns 26 rows and, as this shows, all have a transaction_type value: If I then select D, it adds that to the search but returns NO rows: Oddly, if I change the search to a double negative, I get my data: What's going on? Hoping to be enlightened, Keith
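For what it's worth, a hedged note on why the double negative can behave differently: transaction_type=D only matches events whose extracted value is exactly D, while NOT transaction_type!=D also matches events where the field is null or not yet extracted at that stage of the search. As a workaround while the root cause is tracked down, filtering after the fields are fully extracted sidesteps the mismatch:

index=orafin sourcetype=ORAFIN2
| where transaction_type="D"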
Here's the text string from the log I'm searching:

store license for Store 123456 2022-04-07 19:17:44,360 ERROR path not found

Here's my Splunk search:

index=* host="storelog*" "store license for " | rex field=_raw "Store\s123456\n\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\,\d{3}\s(?P<errortext>.*)path" | stats count by errortext

Why am I getting "No results found" when I search?
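A hedged sketch of a fix: if the string really is a single line as shown, the \n in the regex can never match, and hard-coding 123456 limits it to one store. Assuming the goal is the text between the timestamp and the word "path":

index=* host="storelog*" "store license for"
| rex field=_raw "Store\s+\d+\s+\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s+(?P<errortext>.*?)\s*path"
| stats count by errortext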
Hey all, I just need a little regex help; I'm trying to pull an IP address out and it's not working. Here is my rex:

| rex field=_raw "Remote host:(?<Remotehost>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"

Here is an example event:

4/7/22 3:11:32.000 PM
04/07/2022 03:11:32 PM
LogName=Security
EventCode=4779
EventType=0
ComputerName=BPSQCP00S080.rightnetworks.com
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=115076290
Keywords=Audit Success
TaskCategory=Other Logon/Logoff Events
OpCode=Info
Message=A session was disconnected from a Window Station.
Subject:
Account Name: 705628
Account Domain: RIGHTNETWORKS
Logon ID: 0x13887BFB
Session:
Session Name: RDP-Tcp#81
Additional Information:
Client Name: DESKTOP-PIT40LB
Client Address: 73.175.205.64
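A hedged fix: the sample event never contains the literal text "Remote host:", so the regex has nothing to match; the address appears after "Client Address:". Assuming that is the value wanted:

| rex field=_raw "Client\sAddress:\s+(?<Remotehost>\d{1,3}(?:\.\d{1,3}){3})"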
I'm trying to make a visualization showing our number of signatures, but the data is not very well organized because I have 20+ results with variations of the name Generic, for example:

Generic.TC.ldrvmp 1
Generic.TC.ligldq 1
Generic.TC.ljhook 1
Generic.TC.lmzdbq 1
Generic.TC.lnionm 1
Generic.TC.lniqpu 1
Generic.TC.lxboaq 1
Generic.TC.mpneia 1
Generic.TC.mpngod

I want to group all these results under the name "generic", but it seems that if I try to use wildcards in the search below it gives me an error. I could write out each signature individually in the eval command, but that seems very inefficient. Is it possible to group the results into the same name?

| eval signature=case(signature="Generic.*", "generic") | stats count by signature | sort -count
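A hedged sketch: the = comparison inside case() treats the asterisk as literal text rather than a wildcard; like() (or match() with a regex) handles the pattern, and a true() branch keeps the non-Generic signatures as they are:

| eval signature=case(like(signature, "Generic.%"), "generic", true(), signature)
| stats count by signature
| sort -count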
Hi, I have documents similar to the one below:

request_id: 12345
revision: 123
other_field: stuff
my_precious: {
  1648665400.774453: { keys: [ key:key1, size: 329 ], op: operation_1 }
  1648665400.7817056: { keys: [ key:key2, size: 785 ], op: operation_2 }
  1648665400.7847242: { keys: [ key:key4, size: 632 ], op: operation_1 }
  1648665400.7886434: { keys: [ key:key5, size: 1938 ], op: operation_3 }
  1648665400.7932374: { keys: [ key:key3, size: 23 ], op: operation_2 }
}

I currently have a query to get the frequency of a certain key, but how can I display the "size" information alongside it? My query right now is:

rex (?<keys>"(?<=key:).*?(?=,)") | stats count by keys | sort -count | head 10

which displays the keys with the highest counts, but it doesn't show each key's associated "size". I can't quite figure this out; any help is appreciated!
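A hedged sketch, assuming the raw text really does contain key:/size: pairs in that order: pull both lists with max_match=0, zip them so each key stays paired with its own size, then aggregate. The field names and the delimiter are arbitrary:

| rex max_match=0 "key:(?<key>[^,\]\s]+)"
| rex max_match=0 "size:\s*(?<size>\d+)"
| eval pair=mvzip(key, size, "|")
| mvexpand pair
| eval key=mvindex(split(pair, "|"), 0), size=tonumber(mvindex(split(pair, "|"), 1))
| stats count, avg(size) AS avg_size by key
| sort -count
| head 10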
Is it possible to get all App Insights data using Data Manager in the Splunk Cloud Victoria Experience?
Hey Community, I am trying to get my head around this query. My subsearch is below; it looks for the API path, src, and IPs, and I am doing a DNS lookup to get the hostname, which is present in a different index:

site = "friendly" index=traffic_log src="*" uri="*"
| eval date = date_month + "/" + date_mday + "/" + date_wday + "/" + date_year
| mvexpand date
| dedup src
| dedup uri
| lookup dnslookup clientip as src OUTPUT clienthost as ComputerName
| where like (ComputerName,"p%")
| dedup ComputerName
| table ComputerName,src,uri,date

Here is the main query. ComputerName is the only field from the subsearch that is also present in the main index search, and I want to search with it to get the owner details of the hostname, but I also want the src, uri, and date fields from the subsearch to be added to the table:

index="wineventlog" source="WinEventLog:Application" [ search site = "friendly.org" index=traffic_log src="*" uri="*" | eval date = date_month + "/" + date_mday + "/" + date_wday + "/" + date_year | mvexpand date | dedup src | dedup uri | lookup dnslookup clientip as src OUTPUT clienthost as ComputerName | where like (ComputerName,"p%") | dedup ComputerName | fields ComputerName,src,uri,date]
| dedup ComputerName | dedup ownerEmail | dedup ownerFull | dedup ownerName | dedup ownerDept
| table ComputerName, ownerEmail,ownerFull,ownerName,ownerDept,src,uri,date

Can someone offer some insight into this query?
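A hedged restructuring: a subsearch only hands filter terms back to the outer search, so src, uri, and date never make it into the final table that way. One sketch is to flip the query around: start from the traffic search and attach the owner details with a join on ComputerName (field names and the like("p%") filter are kept from the post; the owner fields are assumed to exist in the wineventlog events):

site="friendly.org" index=traffic_log src="*" uri="*"
| eval date = date_month + "/" + date_mday + "/" + date_wday + "/" + date_year
| dedup src uri
| lookup dnslookup clientip AS src OUTPUT clienthost AS ComputerName
| where like(ComputerName, "p%")
| dedup ComputerName
| join type=left ComputerName
    [ search index="wineventlog" source="WinEventLog:Application"
      | stats latest(ownerEmail) AS ownerEmail, latest(ownerFull) AS ownerFull, latest(ownerName) AS ownerName, latest(ownerDept) AS ownerDept by ComputerName ]
| table ComputerName, ownerEmail, ownerFull, ownerName, ownerDept, src, uri, date

join is subject to subsearch limits, so for large result sets a single search over both indexes combined with stats by ComputerName may scale better.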
Hello, Many thanks in advance for taking the time to read and consider my question; it's always appreciated! I'm currently working on reducing the overhead of our existing Windows UFs by adding to our blacklist in a way that effectively blacklists all logon events for Windows where the Process Name is "-", since these are often extremely voluminous and often don't directly correlate with actual user logins (feel free to correct me if I'm wrong here). These events are also indicated when the "Process ID" is "0x0", which is also covered by the blacklists I've attempted below. The blacklists that I have tried, to no avail, are as follows:

blacklist = EventCode="4624" Message="(?:Process Name:).+(?:C:\\Windows\\System32\\services.exe)|.+(?:C:\\Windows\\System32\\winlogon.exe)|.+(?:C:\\Windows\\CCM\\CcmExec.exe)|.+(?:[-]\sNetwork)"
blacklist = EventCode="4624" Message="Process\sID:\s0x0"
blacklist = EventCode="4624" Message="(?:Process\sName:\s[-])"

Please let me know if I'm missing anything with any of the blacklists above, but in my testing so far, none of them actually stops events with a "Process Name" of "-" from being sent to Splunk, which increases our license ingestion while providing essentially no new information or value. Thanks in advance; any and all answers will be rewarded with karma! Charlie
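One hedged possibility: within a single WinEventLog stanza, repeated blacklist = lines override one another, so only the last one ever takes effect; the numbered keys (blacklist1 through blacklist9) keep them all active. A simplified sketch, assuming the standard Security input stanza:

# inputs.conf on the UF
[WinEventLog://Security]
# drop 4624 events whose Process Name is just "-"
blacklist1 = EventCode="4624" Message="Process\sName:\s+-(\s|$)"
# equivalently, drop them by the 0x0 Process ID
blacklist2 = EventCode="4624" Message="Process\sID:\s+0x0"

The UF needs a restart for inputs.conf changes to apply, and the Message formatting (tabs versus spaces) may require loosening the \s+ pattern.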
I am not able to get rid of the EDT timezone using the strftime command: 2022-04-07 07:00:11.028-EDT. Any suggestions?
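A hedged sketch, depending on where the value lives: if it is _time being formatted, simply leaving %Z (or %z) out of the strftime format string drops the zone; if it is already a string field (called ts here as a placeholder), the suffix can be stripped directly. Only one of the two lines below would be used:

| eval ts_clean=strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| eval ts_clean=replace(ts, "-[A-Z]{3,4}$", "")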
Greetings, A network engineer recently modified the firewall rules for the ports, and we are now getting this error message on our search head, cluster manager, and indexers:

Auto Load Balanced TCP Output Root Cause(s): More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.
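A hedged troubleshooting sketch: this health message generally means the forwarding tier can no longer reach the receiving port (9997 by default) that the firewall change affected. Dumping the effective configuration with splunk btool outputs list tcpout --debug on an affected host shows exactly which destinations it is trying to reach; those host:port pairs then need to match what the firewall now allows. The names below are placeholders:

# outputs.conf: confirm every server entry points at an open host:port
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

splunkd.log on the same host (look for TcpOutputProc entries) will show connection-refused or timeout errors for the specific destinations that are blocked.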