All Posts


Hi @quentin_young, I have a doubt about the Deployer and Cluster Manager being on the same system: probably this is the reason for the error. The others can live on the same server. I usually put the MC and LM on one server and the Deployer and MC on another. Ciao. Giuseppe
MIME-Version: 1.0 Content-Disposition: inline Subject: INFO - Services are in Maintenance Mode over 2 hours -- AtWork-CIW-E1 Content-Type: text/html <font size=3 color=black>Hi Team,</br></br>Please find below servers which are in maintenance mode for more than 2 hours; </br></br></font> <table border=2> <TR bgcolor=#D6EAF8><TH colspan=2>Cluster Name: AtWork-CIW-E1</TH></TR> <TR bgcolor=#D6EAF8><TH colspan=1>Service</TH><TH colspan=1>Maintenance Start Time in MST</TH></TR> <TR bgcolor=#FFB6C1><TH colspan=1>oozie</TH><TH colspan=1>Mon Oct 16 07:29:46 MST 2023</TH></TR> </table> <font size=3 color=black></br> Script Path:/amex/ansible/maintenance_mode_service</font> <font size=3 color=black></br></br>Thank you,</br>BDP Spark Support Team</font>

I need field extractions for the following:
Cluster Name: AtWork-CIW-E1
Service: oozie
Maintenance Start Time in MST: Mon Oct 16 07:29:46 MST 2023
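A hedged starting point for those extractions, assuming the events always carry the HTML table shown above, with the cluster name in the header cell and the service/start-time pair in the highlighted row (the field names here are just illustrative):

```
<your_search>
| rex "Cluster Name:\s*(?<ClusterName>[^<]+)</TH>"
| rex "bgcolor=#FFB6C1><TH colspan=1>(?<Service>[^<]+)</TH><TH colspan=1>(?<MaintenanceStartTime>[^<]+)</TH>"
| table ClusterName Service MaintenanceStartTime
```

If a single email can list several services, adding max_match=0 to the second rex would capture them all as multivalue fields.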
Output:

_time             Namespace  Environment  ServiceDenomination  MetricName        EntityName          Count
2023-10-06 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  1
2023-10-07 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  2
2023-10-08 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  3
2023-10-09 09:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  4
2023-10-09 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  5
2023-10-10 09:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  6
2023-10-10 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  7
2023-10-11 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  8
2023-10-11 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  9
Thank you for the quick answers. The task assigned to me has changed: the customer wants to receive a separate email for each server. This made solving the problem very simple:

| stats values(*) as * by serverName

and then set the alert to trigger for each result. Thank you very much!
Hi @ITWhisperer, my expectation is: suppose every day we have data at 22:00; we need to keep that data and ignore the rest. Can outlier be an option to ignore the data coming with a different timestamp? Please note: it is not always the 22:00 data; it can be any time, but we have to ignore the data with timestamps other than the usual one.

base search:
| mstats sum(Entity.InMessageCount.count.Sum) as count span=1h where index=cloudwatch_metrics AND Namespace=Entity AND Environment=prod AND EntityName="Order.SupplierDepot" AND ServiceDenomination=OutboundBatcher by Namespace, Environment, ServiceDenomination, MetricName, EntityName
| where count > 0
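If the "usual" time really is just the hour-of-day that occurs most often over the search period, one hedged sketch (an assumption, appended to the base search above) is to count rows per time-of-day and keep only the dominant one:

```
| eval tod=strftime(_time, "%H:%M")
| eventstats count AS tod_count BY tod
| eventstats max(tod_count) AS max_tod_count
| where tod_count=max_tod_count
| fields - tod tod_count max_tod_count
```

This keeps whichever daily timestamp dominates without hard-coding 22:00; outlier works on numeric values, not timestamps, so it is probably not the right tool here.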
Not the $SPLUNK_HOME/bin/splunk validate cluster-bundle --check-restart -auth admin command; I executed ./splunk show cluster-bundle-status, and that caused the above errors.
Failed to contact the cluster manager. ERROR: Cluster manager is not enabled on this node
Failed to contact the peers endpoint. ERROR: Cluster manager is not enabled on this node

I encountered the above error message when executing $SPLUNK_HOME/bin/splunk validate cluster-bundle --check-restart -auth admin:password. Has anyone had a similar problem? In addition, I put all the management functions onto one node; is that what caused the above error?
Hi @akthota, you could try something like this (the unescaped quotes in the rex have been escaped so the string parses):

<your_search>
| rex "\{\"taxGeoCode\":(?<taxGeoCode>[^,]*),\"matchCode\":(?<matchCode>[^,]*),\"city\":(?<city>[^,]*),"
| eval cond=if(taxGeoCode="true" OR matchCode="true" OR city="true","true","false")
| stats count(eval(taxGeoCode IN ("true","false"))) AS taxGeoCode count(eval(matchCode IN ("true","false"))) AS matchCode count(eval(city IN ("true","false"))) AS city BY cond

You could also use the spath command (better); in that case, you have to change the field names in the stats command. Ciao. Giuseppe
Not really. It doesn't tell me what data you are dealing with nor what search you are using.
Use stats values() or stats list() to group events by recipient. Then use sendresults - https://splunkbase.splunk.com/app/1794  
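A minimal sketch of the grouping step, assuming the field names from the question below (including responsblePersonEmail as spelled there); after this, each result row holds everything one recipient should get, which the sendresults app can then route row by row per its own documentation:

```
<your_search>
| stats list(serverName) AS serverName list(errorNumber) AS errorNumber BY responsiblePerson responsblePersonEmail
```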
Hi @iswiau .. May we know whether you have only a small list of ids, or a big list of people? If you have only a small list of ids, you can use an if condition to select the email id. Or you can create a CSV file with the email ids and use the map command like this (note the Email_Addresss typo in the original has been corrected to Email_Address):

... | outputcsv TempFile.csv
| stats values(Email_Address) AS emailToHeader
| mvexpand emailToHeader
| map search ="|inputcsv TempFile.csv | where Email_Address=\"$emailToHeader$\" | fields - Email_Address | sendemail sendresults=true inline=true server=\"Your.Value.Here\" from=\"Your.Value.Here\" to=\"$emailToHeader$\" subject=\"Your Subject here: \$name\$\" message=\"This report alert was generated by \$app\$ Splunk with this search string: \$search\$\""
| where comment="MakeSureNoEventsRemail"
| append [|inputcsv TempFile.csv]

The example above is from this page.
Hello Friends, my search returns the following:

serverName  errorNumber  responsiblePerson  responsblePersonEmail
server_a    4586         Bob M.             bobm@tmail.com
server_a    1236         Bob M.             bobm@tmail.com
server_a    788          Bob M.             bobm@tmail.com
server_b    468          Bob M.             bobm@tmail.com
server_b    8798         Bob M.             bobm@tmail.com
server_c    5647         Amelia S.          amelias@tmail.com
server_c    556          Amelia S.          amelias@tmail.com
server_c    789          Amelia S.          amelias@tmail.com
server_c    8799         Amelia S.          amelias@tmail.com

I want to send alerts by email to the appropriate responsible person. Each responsible person should receive ONLY ONE email containing ALL errors on the servers for which they are responsible. In this example, Bob should receive one email containing 5 lines (3 for server_a and 2 for server_b), and Amelia should receive one email containing 4 lines (for server_c). Any help would be appreciated!
Path : /opt/app/splunk/bin/jars/vendors/spark/3.0.1/lib/log4j-core-2.13.3.jar   Installed version : 2.13.3
Hi Harshal, I don't know if this response is needed anymore, but from my experience, I noticed that the only capability that grants this view is the "admin_all_objects". A similar thread can be found here confirming this: https://community.splunk.com/t5/All-Apps-and-Add-ons/What-roles-or-capabilities-are-needed-that-Alerts-will-display/m-p/258018
The tokens passed in the URL need to be constructed from the multi-select input, not hard-coded.
You could start with something like this:

index=*
| fields - _time _raw
| foreach * [| eval <<FIELD>>=if("<<FIELD>>"=="index",index,sourcetype)]
| table *
| fillnull value="N/A"
| foreach * [eval sourcetype=if("<<FIELD>>"!="sourcetype",if('<<FIELD>>'!="N/A",mvappend(sourcetype,"<<FIELD>>"),sourcetype),sourcetype)]
| dedup sourcetype
| table sourcetype

It may fail due to the amount of data being brought back, so you might want to break it up by index. Also, it works by looking at the fields returned in the events, so fields not used in the time period covered will not show up; you might want to run it at different times of the day rather than over longer periods.
Hi @somesoni2, I can't really get the first search to work. How are the count calculations being performed? x and y are not integers, so I'm not sure how sum() is going to work.
Here is an event log output. Both are the same log, only with a different date. I see both event logs in the Splunk output, but I don't want to see one of them if the search returns two identical event logs. That means: if I filter for 7 days and there is only one event log with CVE-2023-21554, I want to see it because it is "new"; but when I filter for 30 days and find two equal event logs, I don't want to see it in the output because it is not new. Right now I still see it.

16/10/2023 04:00:03.000 "175373","CVE-2023-21554","10.0","Critical","10.56.93.133","tcp","1801","Microsoft Message Queuing RCE (CVE-2023-21554, QueueJumper)","A message queuing application is affected a remote code execution vulnerability.","The Microsoft Message Queuing running on the remote host is affected by a remote code execution vulnerability. An unauthenticated remote attacker can exploit this, via a specially crafted message, to execute arbitrary code on the remote host.","Apply updates in accordance with the vendor advisory.","https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-21554 http://www.nessus.org/u?383fb650","Nessus was able to detect the issue by sending a specially crafted message to remote TCP port 1801."
CVE = CVE-2023-21554 Risk = Critical extracted_Host = 192.168.0.1 sourcetype = csv

09/10/2023 04:00:03.000 "175373","CVE-2023-21554","10.0","Critical","10.56.93.133","tcp","1801","Microsoft Message Queuing RCE (CVE-2023-21554, QueueJumper)","A message queuing application is affected a remote code execution vulnerability.","The Microsoft Message Queuing running on the remote host is affected by a remote code execution vulnerability.
An unauthenticated remote attacker can exploit this, via a specially crafted message, to execute arbitrary code on the remote host.","Apply updates in accordance with the vendor advisory.","https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-21554 http://www.nessus.org/u?383fb650","Nessus was able to detect the issue by sending a specially crafted message to remote TCP port 1801." CVE = CVE-2023-21554 Risk = Critical extracted_Host = 192.168.0.1 sourcetype = csv
Hi @LionSplunk, you should identify the period using eval. So, if you run the scan every day, you could try something like this:

index=nessus Risk=Critical
| eval period=if(_time<now()-86400,"Last","Previous")
| stats dc(period) AS period_count values(period) AS period BY CVE extracted_Host
| where period_count=1 AND period="Last"
| rename extracted_Host as Host
| table CVE Host

Ciao. Giuseppe
Thank you so much for your prompt response