Hi @richgalloway  Unfortunately the csv file has 1156 entries, so the case() statement would be huge.  The other issue is that the csv file gets updated daily and urls are added and removed.  I was hoping to use the csv file to extract only this part out of the result urls.  Alternatively, some way of joining the results url with the csv file urls. Regards, Joe
Hi @mm7 , you could extract the status field as a permanent field so you don't need to extract it in the search, or use eval(searchmatch()), but this is the faster way. Ciao. Giuseppe
Hi @DANITO115 ... may I know what command you ran and what the output was? Please copy and paste the full command output from your Solaris host.
Hi @NOORULAINE , as @PickleRick said, it's possible to delete events (not the entire index) by enabling the can_delete role for the user, but it's only a logical deletion, not a physical one, and it's a very dangerous feature that is usually disabled even for administrators. The only physical deletions are the splunk clean eventdata command (which deletes the entire index) and when a bucket rolls from cold at the end of the retention period. Ciao. Giuseppe
wow this is so much cleaner and faster! did not think to regex out the status string thank you!
index=sample(Consumer="prod") ServiceName="product.services.prd.*"
| stats count(eval(HTTPStatus >= 400 AND HTTPStatus < 500)) AS 'fourxxErrors', count(eval(HTTPStatus >= 500 AND HTTPStatus < 600)) AS 'fivexxErrors', count AS 'TotalRequests'
| eval 'fourxxPercentage' = if('TotalRequests' > 0, ('fourxxErrors' / 'TotalRequests') * 100, 0), 'fivexxPercentage' = if('TotalRequests' > 0, ('fivexxErrors' / 'TotalRequests') * 100, 0)
| table "fourxxPercentage", "fivexxPercentage"

The result is showing as 0 for both fields in the table ("fourxxPercentage", "fivexxPercentage"), even though the fourxxErrors and fivexxErrors counts are greater than 0. Is that because it's not showing the decimal values?
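As a sanity check on the arithmetic itself (separate from any SPL quoting issues), here is a quick sketch of the intended percentage math in Python, using a made-up sample of HTTPStatus values. True division keeps the decimals, so a genuine 0 result usually means the error counters themselves came out as 0 or the field references didn't resolve.

```python
# Hypothetical sample of HTTPStatus values, for illustration only.
statuses = [200, 200, 404, 503, 200]

# Mirror the count(eval(...)) conditions from the stats command.
fourxx = sum(1 for s in statuses if 400 <= s < 500)
fivexx = sum(1 for s in statuses if 500 <= s < 600)
total = len(statuses)

# Guarded division, like the if('TotalRequests' > 0, ...) in the eval.
fourxx_pct = (fourxx / total) * 100 if total > 0 else 0
fivexx_pct = (fivexx / total) * 100 if total > 0 else 0
```

With the sample above, both percentages come out as 20.0, not 0 — so the math itself isn't the culprit.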
Hi @mm7 , please try something like this:

<your_search>
| rex "^\[(?<status>[^\]]*)"
| stats dc(status) AS status_count values(status) AS status BY task id
| where status_count=1 AND status="sent"
| table task id status

Ciao. Giuseppe
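For anyone wanting to see the logic of that search outside Splunk, here is a rough Python equivalent, using the thread's example log lines. The regex, field names, and sample data mirror the post; everything else is an illustrative assumption.

```python
import re
from collections import defaultdict

# Sample lines in the format from the original question.
logs = [
    "[sent] task=abc, id=123",
    "[sent] task=abc, id=456",
    "[success] task=abc, id=123",
]

# Like the rex: capture everything after "[" up to the first "]",
# plus the task and id key-value pairs.
status_re = re.compile(r"^\[([^\]]*)\]\s+task=(\w+),\s+id=(\w+)")

# (task, id) -> set of statuses, like stats values(status) BY task id.
statuses = defaultdict(set)
for line in logs:
    m = status_re.match(line)
    if m:
        status, task, id_ = m.groups()
        statuses[(task, id_)].add(status)

# Keep pairs whose only status is "sent"
# (dc(status)=1 AND status="sent" in the SPL).
stuck = [key for key, st in statuses.items() if st == {"sent"}]
```

With the sample data, `stuck` contains only `("abc", "456")`, matching the expected answer in the question.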
See if this helps.  It replaces the my_url field with a string depending on which regular expression matches the data.

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3) [ | inputlookup my_list_of_urls.csv ]
| rename url AS my_url
| eval my_url=case(match(my_url,"(^|\.)microsoft\.com/?$"), "microsoft.com", match(my_url,"(^|\.)office\.com/?$"), "office.com", 1==1, my_url)
| stats count by my_url
| table my_url
| eval my_url=mvjoin(mvindex(split(url,"."),-2,-1),".")
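That one-liner splits the hostname on dots and keeps the last two labels. A rough Python equivalent, assuming a bare hostname as in the example events (note it strips the trailing slash first, which the SPL version would need to handle too):

```python
# Python equivalent of:
#   | eval my_url=mvjoin(mvindex(split(url,"."),-2,-1),".")
def last_two_labels(host: str) -> str:
    # Drop a trailing "/" so the final label is clean, then
    # split on "." and rejoin the last two labels.
    parts = host.rstrip("/").split(".")
    return ".".join(parts[-2:])
```

For example, `last_two_labels("aad.cs.dds.microsoft.com/")` yields `"microsoft.com"`. This works for two-label registrable domains like the ones in the thread, but would give the wrong answer for suffixes like `co.uk`.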
Figured it out: changed "appendcols" to "append" and added this to the end

| stats count(id) AS "count" by id
| where count==1

There is probably a better way, open to other answers, thanks! EDIT: accepted a way better solution
Try -30m@m. -30m on its own is probably parsed as a plain negative number, so it is rounded to the nearest non-negative value, i.e. zero, which in epoch time is the beginning of 1970.
I'm working with data from this search:

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3) [ | inputlookup my_list_of_urls.csv ]
| rename url AS my_url
| stats count by my_url
| table my_url

The events look like this:

02ef65dc96524dabba54a950da7cb0d8.fp.measure.office.com/
0434c399ca884247875a286a10c969f4.fp.measure.office.com/
14f8c4d0e9b7be86933be5d3c9fb91d7.fp.measure.office.com/
3d8e055913534ff7b3c23101fd1f3ca6.fp.measure.office.com/
4334ede7832f44c5badcfd5a6459a1a2.fp.measure.office.com/
5d44dec60c9b4788fb26426c1e151f46.fp.measure.office.com/
5f021e1b8d3646398fab8ce59f8a6bbd.fp.measure.office.com/
6f6c23c1671f72c36d6179fdeabd1f56.fp.measure.office.com/
7106ea87c1e2ed0aebc9baca86f9af34.fp.measure.office.com/
88c88084fe454cbc8629332c6422e8a4.fp.measure.office.com/
982db5012df7494a88c242d426e07be6.fp.measure.office.com/
a478076af2deaf28abcbe5ceb8bdb648.fp.measure.office.com/
aad.cs.dds.microsoft.com/

In the my_list_of_urls.csv there are these entries:

*.microsoft.com/
microsoft.com/
*.office.com/
office.com/

What I'm trying to do is get microsoft.com and office.com from the results instead of the full url.  I'm stumped on how to do it.  Any help is appreciated. TIA, Joe
Hi,
Why is my saved search going back to 1970? I have run the following saved search (screenshot below) and I am passing in 30 minutes (in yellow below). The saved search is set to 30 minutes.
Thanks in advance,
Robbie
Assuming I have this log history:

[sent] task=abc, id=123
[sent] task=abc, id=456
[success] task=abc, id=123

I would like to get a list of all ids that are "sent" but did not get a "success", so in the above example it should just be "456". My current query looks something like this:

"abc" AND "sent"
| table id
| rename id as "sent_ids"
| appendcols [ search "abc" AND "success" | table id | rename id as "success_ids" ]

This gets me a table with the 2 columns, and I'm stuck on how to "left exclusive join" the two columns to get the unique ids. Or maybe I'm approaching this entirely wrong and there is a much easier solution?
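Conceptually, the "left exclusive join" being asked for is just a set difference. A tiny Python sketch using the ids from the example (the data is from the post; the set-based approach is one way to think about it, not necessarily the best SPL translation):

```python
# ids seen with [sent] and with [success], from the example log history.
sent_ids = {"123", "456"}
success_ids = {"123"}

# "sent but never succeeded" = set difference.
stuck_ids = sent_ids - success_ids
```

`stuck_ids` comes out as `{"456"}`, matching the expected answer in the question.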
Looking at it as every 3 hours from 7am to 10pm gets you * 7-21/3 * * *.  Change the first asterisk to a number (0-59) if you want the schedule to be once each hour rather than every minute.
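The hour field 7-21/3 expands to every third hour starting at 7 and not exceeding 21, which can be checked with a one-liner:

```python
# Expand the cron hour field "7-21/3": start at 7, step by 3, stop at 21.
hours = list(range(7, 22, 3))
```

This yields [7, 10, 13, 16, 19] — all within the 7am-10pm window, with nothing between 10pm and 7am.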
The blacklist setting supports neither of those.  See my earlier reply for the list of supported keywords/fields.
Hi @PickleRick @gcusello
I was reading this reply and I currently need to set up the following from your post:
==============
Another relatively well working idea is to use Windows Event Forwarding (easy to set up in a domain environment; it can also be configured in a domainless setup, but then it gets complicated, though still possible) and pull Windows events from a central eventlog collector. I use it quite a lot. Be aware of possible performance issues as you scale horizontally too far.
===========
Do you have any guide/link that explains, step by step, how to set up WEF on two servers?
How to write a cron schedule for every 3 hours exclude from 10pm to 7 am 
There was some maintenance activity at that time.
Hi @richgalloway , I need a clarification on blacklisting: which field do we need to put under the blacklist, newprocessname or parentprocessname? Thanks