All Topics


I have a field called "Node_ID" that I extracted from another field, "issue"; the values are formatted as N1234. Some events didn't fit the pattern and couldn't be extracted normally, so I used eval to identify them:

| eval Node_ID=if(like(issue, "WC SVR%"), "WC SVR", Node_ID)
| eval Node_ID=if(like(issue, "EU SVR%"), "EU SVR", Node_ID)
| eval Node_ID=if(like(issue, "SE SVR%"), "SE SVR", Node_ID)
| eval Node_ID=if(like(issue, "NE SVR%"), "NE SVR", Node_ID)

This does what I want and adds those values to Node_ID, but when I try to search for one of them:

| search Node_ID="WC SVR"

I get zero results, even though the field sidebar shows 4 events with that value. Is there a reason behind this? **Any suggestions for doing this another way are always appreciated**
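As a side note, a sketch of an equivalent form (same field names and values as above, nothing new assumed): the chained if() calls can be collapsed into a single case(), which makes a stray value or typo easier to spot. The quotes around a value containing a space must be kept in the search.

```spl
| eval Node_ID=case(
    like(issue, "WC SVR%"), "WC SVR",
    like(issue, "EU SVR%"), "EU SVR",
    like(issue, "SE SVR%"), "SE SVR",
    like(issue, "NE SVR%"), "NE SVR",
    true(), Node_ID)
| search Node_ID="WC SVR"
```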
This condition can occur when a customer replaces an indexer in the cluster with another node. The bug filed for this issue is SPL-235527.

Workaround:
a.) restart the CM to clear the message, or
b.) reuse the GUID of the old indexer on the new indexer that replaces it

The error indicates that stale freeze notifications for buckets on decommissioned indexers are stuck in the frozen-notification queue of the CM. The sequence of events: the CM marks the bucket to be frozen before the indexer is decommissioned, but the notification isn't sent out from the CM until after the indexer is decommissioned. This produces a log message with the old GUID:

01-30-2023 19:43:07.561 +0000 ERROR CMMaster [9370 CMMasterServiceThread] - sendQueuedFrozenNotifications: GUID not found in PeerMap, guid=6B12B9AC-E5AD-49F0-98F6-DA3DA4B9BA37

This can happen when upgrading indexers without putting the cluster in maintenance mode (which would stop bucket freezing) and the indexer being added has a different GUID than the one being removed. Raising frozen_notifications_per_batch (an undocumented setting, server.conf/[clustering]/frozen_notifications_per_batch=25 versus the default of 10) may help clear out this queue faster so that this is less likely to happen. However, even then, the window between a freeze notification being generated and the indexer being removed before it is sent would still exist. Currently this message can only be cleared by a restart of the CM. The message is more of an annoyance than an actual issue: since the bucket has been moved to a new indexer, it will get a freeze notification and be frozen on the new indexer. No functionality is lost, just a lingering log message.
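For reference, a sketch of the undocumented batch setting mentioned above as it would appear on the cluster manager (a restart is required for it to take effect; the value 25 is the example from the text, not a tuning recommendation):

```conf
# $SPLUNK_HOME/etc/system/local/server.conf on the cluster manager
# Undocumented setting; default is 10. Larger batches drain the
# frozen-notification queue faster, narrowing (not closing) the window.
[clustering]
frozen_notifications_per_batch = 25
```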
Since the bucket will have already moved to a new indexer/GUID, that indexer will tell the CM it owns the bucket, and when the CM sees its new location it will issue another freeze notification for the new GUID and the bucket will get properly frozen. The bug filed for this issue is SPL-235527. In the meantime, if a CM restart is undesirable, you can give the new indexer the same GUID as the old one. The GUID is configured in $SPLUNK_HOME/etc/instance.cfg:

[general]
guid = xxx
We are working on upgrading our Splunk environment from 8.2.7 to 9.0.4. When we attempt to set cliVerifyServerName = true in server.conf and start Splunk, the following message just keeps being echoed in an endless loop: "ERROR: certificate validation: self signed certificate." We are only using self-signed certificates to secure splunkd, but SplunkWeb does have a real cert signed by a recognized signing authority. If we don't set this, we see the following message on startup: "WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details". This feels like a bug to me, but I'm not sure, since certificates are complicated. Anyone else running into this issue? Thanks.
Hello Splunkers, I am trying to schedule an alert for when there is no data from a particular extracted field in the last 30 minutes. Below are sample events:

Feb 28 12:49:25 hostabc postfix/smtpd[61995]: connect from host1.abc.local[158.xx.xx]
Feb 28 12:49:25 hostxyz postfix/smtpd[61995]: connect from host2.abc.local[158.xx.xx.xx]
Feb 28 12:49:25 host123 postfix/smtpd[61995]: connect from host3.abc.local158.xx.xx.xxx]

I am using the regex below to extract sourcehost, which gives me host1.abc.local, host2.abc.local, host3.abc.local:

| rex field=_raw ".*from (?<sourcehost>.*)"

I want to create an alert when I don't see any events from the source hosts in the last 30 minutes. The alert should say "No data received from <sourcehost> in the last 30 minutes". Thanks in advance.
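One common pattern for a "no data" alert, sketched below. It assumes the hosts to watch are known in advance and kept in a lookup (here called expected_hosts.csv with a sourcehost column — that lookup and the index name are assumptions, not from the original post). Run it over the last 30 minutes and trigger when the result count is greater than 0.

```spl
index=mail "connect from"
| rex field=_raw "connect from (?<sourcehost>[^\[]+)\["
| stats count by sourcehost
| append [| inputlookup expected_hosts.csv | eval count=0]
| stats sum(count) as count by sourcehost
| where count=0
| eval message="No data received from ".sourcehost." in the last 30 minutes"
```

The append of the lookup with count=0 makes hosts that sent nothing appear in the results at all; without it, a silent host simply has no row to alert on.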
Hi Friends, in our project we have 3 regions. Each region has one HF and 30+ UFs. Each region's UF servers connect to the corresponding HF, and all three regions' UFs connect to the DS. Recently we faced this issue: one region's HF server went down, so that region's UF data was not received by the indexer. To avoid this in future, how do we implement load balancing in our project? I'm new to Splunk development and want to know where and how to implement LB. Please guide me on how to achieve this. Thanks in advance.
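Forwarder-side load balancing is configured in outputs.conf on each UF: listing more than one receiver in a target group makes the forwarder automatically rotate between them and fail over if one goes down. A sketch, assuming a second HF is added per region (host names and port 9997 are placeholders; this file can be pushed to all UFs via the DS):

```conf
# outputs.conf on each UF (deployable as an app from the DS)
[tcpout]
defaultGroup = region_hfs

[tcpout:region_hfs]
# Auto load balancing across both HFs; if one is down,
# the UF keeps sending to the other.
server = hf1.example.com:9997, hf2.example.com:9997
```

Note that load balancing only helps if there is actually a second receiver; with a single HF per region there is nothing to fail over to.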
I have a search in Splunk that returns events for failed logins. I want to check 30 minutes after each event to see whether that user had a successful login. I'm struggling with the second part of this search.

index=logins | where AuthenticationResults="failed" | eval failedLogin=strftime(_time,"%x %r")
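A sketch of one way to do the second part. It assumes a user field and that AuthenticationResults takes the values "failed" and "success" — neither is confirmed in the post. For each user, take the latest failure and check whether a success followed within 30 minutes (1800 seconds):

```spl
index=logins AuthenticationResults IN ("failed", "success")
| stats max(eval(if(AuthenticationResults="failed", _time, null()))) as last_fail
        max(eval(if(AuthenticationResults="success", _time, null()))) as last_success
        by user
| where isnotnull(last_fail)
        AND NOT (last_success > last_fail AND last_success <= last_fail + 1800)
```

The final where keeps only users whose latest failure was not followed by a success inside the 30-minute window (a null last_success fails both comparisons, so users with no success at all are kept).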
I am trying to extract "user20" from the rest of "_9a4ab75c_239_process.log". I tried multiple ways but am unable to separate the user name at the underscore.

/app_5/appname/appanem2/logs/agent-machine4/sre/query/prot/user20_tt65rft67uu87_process.log
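A sketch of one regex that isolates the user name in the final path segment. It assumes the user name itself never contains an underscore (true for the example shown) and that the path lives in the source field — adjust the field name as needed:

```spl
| rex field=source "/(?<username>[^/_]+)_[^/]*_process\.log$"
```

The $ anchor confines the match to the last path segment, [^/_]+ stops the capture at the first underscore, and [^/]* lets the middle token contain further underscores.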
Hello! I am trying to find system uptime, and here's the scenario: the monitoring/status-check log returns fields like InstanceID, timestamp, and Count. There is a status-check event every 5 minutes. The system is up when Count >= 5 and down otherwise. I would like to compute the Downtime value, either as clock time as shown in the sample table below or in seconds. Can someone please help with this?

InstanceID  timestamp             SampleCount  Difference  Downtime
insA        2023-02-21T16:00:00Z  5            0
insA        2023-02-21T16:05:00Z  4            0
insA        2023-02-21T16:10:00Z  2            00:05
insA        2023-02-21T16:15:00Z  5            00:10       00:10
insA        2023-02-21T16:20:00Z  5            0
insA        2023-02-21T16:25:00Z  5            0
insA        2023-02-21T16:30:00Z  4            0
insA        2023-02-22T01:35:00Z  2            09:05
insA        2023-02-22T01:40:00Z  5            09:10       09:10
insA        2023-02-22T01:45:00Z  5            0
insA        2023-02-22T01:50:00Z  5            0
insA        2023-02-22T01:55:00Z  5            0
insA        2023-02-22T02:00:00Z  5            0
insA        2023-02-22T02:05:00Z  5            0
insA        2023-02-22T02:10:00Z  4            0
insA        2023-02-22T02:15:00Z  3            00:05
insA        2023-02-22T02:20:00Z  5            00:10       00:10
insA        2023-02-22T02:25:00Z  5            0
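A sketch of one way to derive downtime from these events. It assumes the events carry the InstanceID and SampleCount fields as shown, and that checks arrive every 5 minutes (300 seconds, used as the fallback gap for the first event): mark each check up or down, measure the gap to the previous check, and sum the gaps attributed to down checks.

```spl
| eval status=if(SampleCount >= 5, "up", "down")
| sort 0 InstanceID _time
| streamstats window=1 current=f last(_time) as prev_time by InstanceID
| eval gap=_time - prev_time
| eval downsecs=if(status="down", coalesce(gap, 300), 0)
| stats sum(downsecs) as downtime_seconds by InstanceID
| eval Downtime=tostring(downtime_seconds, "duration")
```

tostring(..., "duration") renders seconds as HH:MM:SS, matching the clock-time style in the table.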
Hi, we are about to upgrade to version 9.0.4, but we have noticed that the version 9 colors are very bright. Is there a way to use the 8.x colors? We have many panels, and all our end users are complaining that it is extremely bright over time. I know I can change each panel, but we have so many at this stage that I am looking for a way to do it at the system level, please. (Screenshots omitted: the 9.0.4 colors compared with 8.1.0, which is not so bright.) Any help would be great. Rob
How can we retrieve data from a Splunk dashboard and display the results in a Java Spring Boot application using the Splunk SDK / Splunk REST APIs?
Hi All, I am a bit puzzled. I know that connecting to a foreign AWS account via user credentials is in itself a bit of an open issue (security-wise), but it should work. And it does work for the Splunk Observability instance; however, it does not for our self-hosted instance of Splunk Enterprise. Both instances use the same user and therefore the same policies. I saw a post from way back advising to eliminate all namespaces not needed, so I did, and am left with two namespaces/dimensions I know should fetch information: AWS/EC2 and AWS/EBS. I also have a local IAM role for my local AWS account which works just fine, even with dimensions not used in AWS. There is, however, one abnormality in the logs: input via Splunk_AWS_Role works great, but input with Cross_Splunk_Connection delivers no data and also does not log any errors. Any help is very much appreciated. Kind regards, Mike
I have created a new entity type (for example, DB_connection) and a new saved search ("ITSI Import Objects - Database_Connection") which populates new DB entities. The newly added entities are listed in the Entity Management screen, but how do I map these entities dynamically/automatically to the existing entity type? At the moment I am doing it manually using the bulk action menu. Thanks in advance.
Hi! I'm using Splunk Cloud. I'm trying to create an alert to catch the event when someone disables an alert. I need advice on the search for this alert, since I've had no luck digging into `index=_* disbled_alert_name`.
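One hedged starting point: Splunk 9.x records configuration file changes in the _configtracker index, and disabling an alert writes disabled = 1 into savedsearches.conf. Exact field paths can vary by version, so treat this as a sketch to explore rather than a finished alert search:

```spl
index=_configtracker sourcetype=splunk_configuration_change
    data.path=*savedsearches.conf*
| search "disabled"
```

Inspecting a few raw events from that index first will show which nested data.* fields carry the stanza name and the changed property in your environment.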
I'm trying to add a "Downtime" field to my table. The timestamp on the event isn't reliable because it is when the issue was reported, not when it began, so I had to extract the time from another field. This is a two-part question.

1. Is there a better, simpler way to get my "Downtime" variable?

| rex field=issue ".+(?P<S_Time>\d{4})[Z]\s(?P<S_Date>\d{2}\s[A-Z][a-z]{2})"
| eval Issue_Began=S_Time." ".S_Date." ".date_year       ```Output ex - 0654 27 Feb 2023```
| eval StartTime=strftime(strptime(Issue_Began, "%H%M %d %B %Y"), "%m/%d/%Y %H:%M")
| eval duration=now()-strptime(StartTime, "%m/%d/%Y %H:%M")
| eval duration=tostring(duration,"duration")
| rex field=duration "((?P<D>\d{1,2})\+)?(?P<H>\d{2}):(?P<M>\d{2})" ```Output ex - 1+05:16.51```
| eval Downtime=D."D ".H."H ".M."M "

2. When a system is down for less than 24 hours, the Downtime field is blank; otherwise it gives the expected result, e.g. "1D 05H 16M". How do I alter that eval to skip "D" if it is null? I'm assuming that's the issue, because the field works properly for all events over 1 day long. Answers to either question are greatly appreciated!
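On part 2, your assumption is right: concatenating a null field makes the whole eval result null, so when duration has no day component, D is null and Downtime becomes null. A minimal fix using the same field names, making the day term optional:

```spl
| eval Downtime=if(isnull(D), "", D."D ").H."H ".M."M"
```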
Hello, I require the OU path to be updated for my admin groups. Is there somewhere I can amend this? Thanks, Chris
Hello Everyone,  We are trying to monitor specific local paths on a remote server (Remote01) and send the data to Splunk, either in an existing index or a new one.  We have installed a Universal Forwarder on the remote server and were able to fetch data from one folder (\\Remote01\e$\Document-DEF\Folder01) under the default index (index=main). However, we are unable to monitor a second folder (\\Remote01\e$\Document-GHI\Folder02) because the Universal Forwarder setup file only allows for one path.  We are facing the following challenges and would appreciate any guidance or advice on how to overcome them and successfully monitor both folders on the remote server in Splunk:  1.    We can't create a new index for the remote server. 2.    We can't get any information from the other folder we want to monitor ('Folder02'). 3.    We can't get information from the remote server in the existing index.  So in short, we can monitor one folder on the remote server Remote01 but unsure how to configure the forwarder to monitor a second folder on the same Remote01 server. Thanks in advance for your help!
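A Universal Forwarder can monitor any number of paths: the installer only asks for one, but each additional path is simply another [monitor://] stanza in inputs.conf on the forwarder. A sketch using the two local paths from the post (the drive letter form is how the UF sees \\Remote01\e$ locally; index = main matches what is already working, and any other index must already exist on the indexer):

```conf
# $SPLUNK_HOME\etc\system\local\inputs.conf on Remote01
[monitor://E:\Document-DEF\Folder01]
index = main

[monitor://E:\Document-GHI\Folder02]
index = main
```

Restart the UF after editing the file so both monitors take effect.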
I am new to SPLUNK.  I have an internship that is asking me to automate their health checks.  How exactly can this be done? 
Hello Splunkers, how can we send email to multiple email addresses using a Splunk alert? I saw the documentation below on the Splunk site, but it doesn't have any sample for multiple emails.

https://docs.splunk.com/Documentation/Splunk/9.0.4/Alert/Emailnotification

Example:
If country=US, recipients will be ameri@gmail.com
If country=Argentina, recipients will be ameri@gmail.com, argentina@gmail.com
If country=Mexico, recipients will be ameri@gmail.com, mexico@gmail.com
If no match, then ameri@gmail.com

Thanks!
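One common approach (a sketch, not the only way): compute the recipient list inside the search with case(), then put the token $result.recipients$ in the alert's "To" field, which the email action expands from the first result row. The base search is a placeholder.

```spl
... your base search ...
| eval recipients=case(
    country="US",        "ameri@gmail.com",
    country="Argentina", "ameri@gmail.com,argentina@gmail.com",
    country="Mexico",    "ameri@gmail.com,mexico@gmail.com",
    true(),              "ameri@gmail.com")
```

Since the token takes its value from the first result, this fits best when each alert run concerns a single country; multiple countries per run would need per-country triggering or a map/sendemail-based approach instead.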
Hello, inside my dashboard I have a multiselect input. The options in this field are determined by a query, which is working perfectly fine. I would like to hide or display certain fields if a specific value is inside this result set (I do know the column name, but not the position). Please note that the fields should get displayed before anything is selected in the multiselect field. I already have a working solution for the case where the value I am looking for is returned at the first position in my query, using this (simplified for readability) code:

      <input type="multiselect">
        <label>Please Select</label>
        <search>
          <query>"a query returing a table with two column and multiple rows"</query>
          <done>
            <set token="QUERY_result">$result.column$</set>
            <eval token="QUERY_check">case($QUERY_result$=="theValueIamLookingFor","true")</eval>
          </done>
        </search>
      </input>
      <input type="radio" token="RadioTest" depends="$QUERY_check$">
      </input>

If the value is returned at another position, the solution no longer works. Is there a way to loop through the result, somewhat like the pseudocode below? The plan is to show or hide multiple views based on different values.

    for each row in result.column do
      if row == "searchString1" do set token1 done
      if row == "searchString2" do set token2 done
      [...]
    done

If this is possible without altering the query, that would be perfect, since the query is used in other places in the dashboard and changing it would get messy.
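Simple XML has no loop construct, so one hedged workaround is to leave the original query untouched and add a second search per value you care about: filter the result down to that value, reduce it to a count, and set the token from the count. The column name and value below are the placeholders from the snippet above; repeat the block with a different where clause and token for each further value.

```xml
<search>
  <query>"a query returning a table with two columns and multiple rows"
    | where column="theValueIamLookingFor" | stats count</query>
  <done>
    <condition match="$result.count$ &gt; 0">
      <set token="QUERY_check">true</set>
    </condition>
    <condition>
      <unset token="QUERY_check"></unset>
    </condition>
  </done>
</search>
```

The extra searches cost one more dispatch each, but they keep the shared query intact for the rest of the dashboard.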
Hi, I have a query where I first get 3 fields from an index ("A", "B", "C") describing tasks to be completed, and then adjoin a separate lookup containing employee names. I need to use a random function to randomly assign tasks to employees. So if there are 10 tasks and 5 employees, each employee would randomly get 2 tasks; or if there are 2 tasks and 5 employees, 2 of the 5 employees would each randomly get one of the 2 tasks. Here is my current query:

index="XYZ" | table "A", "B", "C" | appendcols [| inputlookup "DEF"] | eval rnd = random()

Can you please help?
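A sketch of one way to do the random assignment. It assumes the DEF lookup has (or is given) a numeric id column, here called emp_id with values 1..5 — both the column and the hardcoded employee count of 5 are inventions for this example, not from the original query. Each task row draws a random id and the lookup maps it back to an employee name:

```spl
index="XYZ"
| table A B C
| eval emp_id = (random() % 5) + 1   ``` 5 = number of employees, hardcoded for the sketch ```
| lookup DEF emp_id OUTPUT employee
```

If DEF has no id column yet, one can be added once with something like | inputlookup DEF | streamstats count as emp_id | outputlookup DEF. Note this is uniform random assignment per task, not a strict even split; guaranteeing exactly 2 tasks each for 10 tasks and 5 employees would need a round-robin scheme (e.g. streamstats count over the tasks, then modulo the employee count) instead of random().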