arrowecssupport's Posts


The data looks like this (sorry, I've had to obscure the exact data):

1.1 vendor X4010 (mahyts4)
1.2 vendor X4010 (Failed)
1.3 vendor X4017 (dokdok4)

The 1st rex looks for the part number (X4010) where there is a "Failed" part. The 2nd rex looks for a list of all part numbers (X4010 & X4017). The problem happens when I'm trying to run a complete list of part numbers, but the 1st rex always populates my search because it runs in the background.
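In case it helps anyone reading later, a minimal sketch of the two extractions as separate inline rex commands; the field names (failed_part, all_parts) and the X-plus-four-digits pattern are assumptions based on the obscured sample above:

    ... your base search ...
    | rex "(?<failed_part>X\d{4})\s+\(Failed\)"
    | rex max_match=0 "(?<all_parts>X\d{4})"

Because each inline rex writes into its own named field, neither one clashes with the automatic extraction running in the background.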
Hi, I have two different field extractions that I need to use. The 1st one is used all the time on my system, and I've set up a rex to extract it automatically. However, I've got another rex which is similar but slightly different, and when I try to use it inline with a search, the results get messed up because the 1st one runs in the background. Is there a way of telling this specific search not to perform the 1st field extraction? Thanks
We are using Splunk to alert when we see specific events in our logs. There are hundreds of different log events we might get, and a few that need to be alerted on. We have created an event type so we can make our searches quicker, but even the event type configuration is very large. The search looks something like this:

index=weblogs Logfile="error1" OR Logfile="error2" OR Logfile="error3" OR Logfile="error4" OR Logfile="error5" OR Logfile="error6" OR Logfile="error7"

...and on and on. The list ends up at around 60-70 different OR statements, and it is growing all the time. What is the best way to reduce the size of this massive search?
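One common way to shrink a search like this is to move the list into a lookup file and let a subsearch expand it. A sketch, assuming a one-column CSV named alert_logfiles.csv with a Logfile header:

    index=weblogs [ | inputlookup alert_logfiles.csv | fields Logfile ]

The subsearch expands to (Logfile="error1" OR Logfile="error2" OR ...), so new entries only need to be added to the CSV, not to the search. Note, though, that event types can't contain subsearches or pipes, so this helps the saved search rather than the event type itself.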
We monitor the log output of many file storage systems. Some devices have only a few disks, others have hundreds, but there is no way of knowing how many disks each log file will contain. The issue (in the real world) is that the customer has 2 incompatible drives: the 750gb HDD with part code HRF750. We want to be able to extract the full line

750gb HDD partnumber: HRF750 s/n: 31564847877

from the log wherever we find the part code HRF750. We can then put this in a table or report, allowing us to find systems running the incompatible hardware. How do I go about doing this? Below is an example of what a log file looks like.

Array model: RX-100
250gb SSD partnumber: XFA250 s/n: 12313123123
250gb SSD partnumber: XFA250 s/n: 56498787521
250gb SSD partnumber: XFA250 s/n: 95195195198
250gb SSD partnumber: XFA250 s/n: 51515151511
250gb SSD partnumber: XFA250 s/n: 95959595959
750gb HDD partnumber: HRF750 s/n: 31564847877
750gb HDD partnumber: HRF750 s/n: 89765432145
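A sketch of one way to pull those lines out, assuming each array report arrives as a multi-line event (the index name and extracted field names here are illustrative):

    index=storage "HRF750"
    | rex max_match=0 "(?<disk_line>\d+gb \w+ partnumber: HRF750 s/n: \d+)"
    | mvexpand disk_line
    | rex field=disk_line "partnumber: (?<partnumber>\S+) s/n: (?<serial>\d+)"
    | table host partnumber serial

max_match=0 captures every matching disk line in the event, however many disks there are, and mvexpand turns the multivalue field into one row per disk for the table.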
Hi, thanks for the response. I fixed it and posted my own answer below.
It appeared that in my dedup I was using a "tag" that the user group didn't have permissions to view.
Why, when I use dedup in a search from a user account, does it return no results, when the exact same search from an admin account works? It works for the users until I add the dedup. Very odd.
Thanks, that appears to work and might work in some situations. The problem is I want to be able to turn the search into an event type, and that isn't possible when the search uses a pipe.
We have data which can display a computer's serial number. The data is a little odd: we have to extract the serial number using one rex, and one is extracted automatically. This creates two fields, serialnumber1 & serialnumber2. I've tried to create an alias called serialnumber, but I've run into problems.

Applies to sourcetype=imap
Field aliases > serialnumber1 = serialnumber
Field aliases > serialnumber2 = serialnumber

The problem is this:

1. When serialnumber1 is NULL and serialnumber2 is populated, serialnumber is populated.
2. When serialnumber1 is populated and serialnumber2 is NULL, serialnumber IS NOT POPULATED.
3. When both serialnumber1 & serialnumber2 are populated, serialnumber is populated.

Why is it the case that the alias fails when result 2 above is true?
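For comparison, a sketch of the two aliases as they would look in props.conf (the stanza and alias class names are assumed):

    [imap]
    FIELDALIAS-serial1 = serialnumber1 AS serialnumber
    FIELDALIAS-serial2 = serialnumber2 AS serialnumber

An eval-based alternative that covers all three cases, since coalesce returns the first non-NULL argument:

    ... | eval serialnumber=coalesce(serialnumber1, serialnumber2)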
What is the best training material you've seen to help you move towards a clustered environment?
In the end it appeared that the Splunk server was skipping triggering, as apparently there is a limit of 1 real-time alert per CPU core. We increased this and it mostly fixed the issues.
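For reference, the per-CPU search caps live in limits.conf; a sketch with illustrative values only, not recommendations:

    # limits.conf
    [search]
    max_searches_per_cpu = 2
    max_rt_search_multiplier = 2

Raising these trades search concurrency for CPU headroom, so it's worth watching scheduler load after the change.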
I've added the earliest and latest settings under "edit trigger conditions", which I think does the same as putting them on the search page, yes/no? Regarding the need to throttle the alerts: we've not had issues with real-time alerts, we know the data/events that come in quite well, and it shouldn't be a problem in this situation.
We've been using real-time alerts to send us an email whenever a specific log/event is hit. However, we only have 4 CPU cores and can only run 4 real-time alerts. What is the best configuration for a scheduled alert that runs every minute, so we get 1 email every time a new log is triggered? I'm getting stuck because it sends lots of emails each time an alert is triggered. My criterion is: 1 new log, 1 email sent out.
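A sketch of a once-a-minute scheduled alert in savedsearches.conf; the stanza name, search, and email address are placeholders:

    [my_error_alert]
    search = index=weblogs Logfile="error1"
    enableSched = 1
    cron_schedule = * * * * *
    dispatch.earliest_time = -1m
    dispatch.latest_time = now
    counttype = number of events
    relation = greater than
    quantity = 0
    alert.digest_mode = 1
    action.email = 1
    action.email.to = ops@example.com

Digest mode sends one email containing all results from a run, and the -1m window means each run only reports the last minute's events.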
Sorry, it didn't resolve my issue. Thank you for your time on this.
(?-s)(?<fieldname>^.*(Failed).*$) This was the final rex that gave me exactly what I wanted.
Thanks for this. Yes, it shows that it ran 1441 times on that day, meaning it ran every minute of the day, so all working well. Also, if I run the search that the alert is built on, the event shows up, so I know the criteria were met. This morning I sent a test alert: no email. Restarted services, sent another test alert, and it worked.
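For anyone checking the same thing, a sketch of the scheduler search behind counts like the 1441 above (the saved search name is a placeholder):

    index=_internal sourcetype=scheduler savedsearch_name="my_error_alert" status=success
    | stats count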
I should say I tested it and it was working at 11:55pm on Friday. Then nothing for the rest of the weekend.
I created a new lookup table and added the new fields to the search, and also to the email alert that went out.
We have had a problem over the weekend when one of our alerts did not trigger. I had to restart the services to get it all working again. Does anyone have any idea why this might have happened? It's possible it was related to changes we had made. It's the second time in a week we've needed to restart the services to get changes working.