All Posts

Hi Forum, I have written a script that pulls the receive power from optical transceivers every hour. All is well with this except that, as the values are a measurement of loss, they are negative values in decibels (dBm). I would really like to represent this with the single value radial - I can get it to work perfectly with a marker gauge, but having that "rev counter" type representation would be not only so cool but so useful for getting power readings at a glance on our long range transmission kit. It's such a perfect representation, I think, for this kind of measurement, and it would really appeal to that more "scientific" engineering type of audience. When I use the single value radial I cannot for the life of me work out where I can adjust the scale (ideally -40dBm to 0dBm). I just expected this to be like managing any other sort of float (I am working with a decimal number, not a string or anything); it just happens to be a negative value. Am I just missing something really silly? Any help would be gratefully received - I'm using Dashboard Studio if that makes a difference. Thank you
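In the meantime, the workaround I have been playing with is rescaling the reading onto 0-100 before it reaches the radial. This is only a rough sketch - rx_power_dbm is a made-up stand-in for my real field name and -17.4 is just a sample value:

| makeresults
| eval rx_power_dbm = -17.4 ``` hypothetical sample reading in dBm ```
| eval rx_power_pct = round(((rx_power_dbm + 40) / 40) * 100, 1) ``` map -40..0 dBm onto 0..100 ```

That puts -40dBm at 0 and 0dBm at 100 on the dial, but it loses the real dBm figures, so I'd still much rather set the scale itself.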
@_JP Hi, the query you shared works the same as the one I shared. The percentage values are showing 0. Is it because we need to add decimal values after the 0? I tried

(fourxxErrors / TotalRequests) * 100, 2)

instead of

(fourxxErrors / TotalRequests) * 100, 0)

but no use. Do you have any other idea?
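Just so I understand the syntax: is the trailing 0 in the if() the fallback value when TotalRequests is 0, rather than a decimal-place setting? In that case, would wrapping the division in round(), as in this sketch (same field names as above), be the right way to force two decimals?

| eval fourxxPercentage = if(TotalRequests > 0, round((fourxxErrors / TotalRequests) * 100, 2), 0)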
I think the issue is more complicated than that. I understand not to look for it in _internal; that is not the issue. The issue is that Splunk generates different data from the original source, with a different test date, which is NOT in the file. It has to do with the cluster environment. Is anyone a super expert in such things?
Remove all the ticks/quotes from the field names in your SPL - for what you are doing they aren't necessary. I only quote my field names if they contain whitespace characters. Once I did that I was able to "fake" your search in my environment and get results:

index=_internal status=*
| rename status AS HTTPStatus
| stats count(eval(HTTPStatus >= 400 AND HTTPStatus < 500)) AS fourxxErrors,
        count(eval(HTTPStatus >= 500 AND HTTPStatus < 600)) AS fivexxErrors,
        count AS TotalRequests
| eval fourxxPercentage = if(TotalRequests > 0, (fourxxErrors / TotalRequests) * 100, 0),
       fivexxPercentage = if(TotalRequests > 0, (fivexxErrors / TotalRequests) * 100, 0)
| table fourxxPercentage, fivexxPercentage

Also, I just added this at the beginning to turn misc data in _internal into events that "look" like yours for this example's purpose:

index=_internal status=*
| rename status AS HTTPStatus
I am not seeing attachments or other screenshots.

Usually when I see duplicate events like this it has been because a file was replicated somehow "underneath" Splunk within a directory, so Splunk thinks it is a new file and starts indexing it again. Or, I've seen this happen if you have the log files going to a shared mount point and two different Forwarders are pointing at the same files.

A few questions to help you troubleshoot:

- You mention splunk-server. What do the splunk_server field and the values of things like host, sourcetype, and source look like for these events? (There is a sketch of one way to check this at the end of this post.)
- Your timestamps are wildly off, and not necessarily in a predictable way (e.g. just by 1 hour). Does your log data have timestamps within it, or are you relying on the timestamp being derived from when Splunk "sees" your log?
- Have you poked around in the _internal index to see where Splunk "saw" any files matching the following: /var/log/acobjson/*100223*rtm*

NOTE: Don't look for source=/var/log/acobjson/*100223*rtm* in index=_internal, because the source= in this context refers to the Splunk log files that were indexed. You can start without specifying a field, but you can also try something like index=_internal series=/var/log/acobjson/*100223*rtm* since that is one field Splunk will log this info in as it is monitoring files.
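If it helps, here is a rough sketch of the duplicate check I mean for that first question - your_index is only a placeholder for wherever these events land, and the source path is the one from your post:

index=your_index source="/var/log/acobjson/*100223*rtm*" ``` your_index is a placeholder ```
| stats count dc(splunk_server) AS indexer_count values(host) AS hosts values(source) AS sources BY _raw
| where count > 1

Any row that comes back is an event text that was indexed more than once, and the hosts/sources/indexer_count columns usually point at whether a second forwarder or a copied file is responsible.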
Hi @ITWhisperer , If the URLs were consistent, that would be a great idea. Unfortunately, the URLs have between 1 and 8 dot-separated parts, and I don't know where to start. Alternatively, if there is a way of adding what is in the csv to the results, that would work. Regards, Joe
Hi @richgalloway  Unfortunately the csv file has 1156 entries, so the case() statement would be huge. The other issue is that the csv file gets updated daily, and URLs are added and removed. I was hoping to use the csv file to say "get only this part out of the result URLs". Alternatively, some way of joining the results URL with the csv file URLs would work. Regards, Joe
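One thing I have been wondering about is whether a wildcard lookup definition could do the matching for me, so the csv stays the single source of truth. This is only a sketch - the stanza name is a placeholder and it assumes the csv has a single url column:

# transforms.conf (or the equivalent lookup definition in the UI); names are placeholders
[my_list_of_urls]
filename = my_list_of_urls.csv
match_type = WILDCARD(url)
max_matches = 1

and then in the search something like:

| lookup my_list_of_urls url AS my_url OUTPUTNEW url AS matched_pattern

so matched_pattern would come back as e.g. *.office.com/ rather than the full hostname. I have no idea if that is the right approach, though.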
Hi @mm7 , you could extract the status field as a permanent field so you don't need to extract it in the search, or use eval(searchmatch()), but this is the faster way. Ciao. Giuseppe
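If you prefer the permanent extraction, a minimal props.conf sketch (the sourcetype name is only a placeholder for yours) could be:

# props.conf on the Search Head; your_sourcetype is a placeholder
[your_sourcetype]
EXTRACT-status = ^\[(?<status>[^\]]*)\]

after which status is available in every search on that sourcetype without the rex.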
Hi @DANITO115 ... may i know what command you ran, what was the output.. copy paste the full command output from your solaris please
Hi @NOORULAINE , as @PickleRick said, it's possible to delete events (not the entire index) by enabling the can_delete role for the user, but it's only a logical deletion, not a physical one, and it's a very dangerous feature that is usually disabled even for administrators. The only physical deletions are the splunk clean eventdata command, which deletes the entire index, and when a bucket rolls out of cold at the end of the retention time. Ciao. Giuseppe
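For completeness, if an admin really does need the physical wipe, the clean command has to be run with Splunk stopped and it removes ALL data in the named index (the index name here is only a placeholder):

# run on the indexer's CLI, not from a search; your_index is a placeholder
splunk stop
splunk clean eventdata -index your_index
splunk start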
Wow, this is so much cleaner and faster! I did not think to regex out the status string. Thank you!
index=sample(Consumer="prod") ServiceName="product.services.prd.*" | stats count(eval(HTTPStatus >= 400 AND HTTPStatus < 500)) AS 'fourxxErrors', count(eval(HTTPStatus >= 500 AND HTTPStatus < 600)) ... See more...
index=sample(Consumer="prod") ServiceName="product.services.prd.*" | stats count(eval(HTTPStatus >= 400 AND HTTPStatus < 500)) AS 'fourxxErrors', count(eval(HTTPStatus >= 500 AND HTTPStatus < 600)) AS 'fivexxErrors', count AS 'TotalRequests' | eval 'fourxxPercentage' = if('TotalRequests' > 0, ('fourxxErrors' / 'TotalRequests') * 100, 0), 'fivexxPercentage' = if('TotalRequests' > 0, ('fivexxErrors' / 'TotalRequests') * 100, 0) | table "fourxxPercentage", "fivexxPercentage" The result is showing as 0 for both fields inside the table "fourxxPercentage", "fivexxPercentage". Actually, fourxxErrors and fivexxErrors count are greater than 0. Is that because it's not showing the decimal values?
Hi @mm7 , please try something like this:

<your_search>
| rex "^\[(?<status>[^\]]*)"
| stats dc(status) AS status_count values(status) AS status BY task id
| where status_count=1 AND status="sent"
| table task id status

Ciao. Giuseppe
See if this helps. It replaces the my_url field with a string depending on which regular expression matches the data.

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3)
    [ | inputlookup my_list_of_urls.csv ]
| rename url AS my_url
| eval my_url=case(match(my_url,".*/microsoft\.com$"), "microsoft.com",
                   match(my_url,".*/office\.com$"), "office.com",
                   1==1, my_url)
| stats count by my_url
| table my_url
| eval my_url=mvjoin(mvindex(split(url,"."),-2,-1),".")
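For example, run against one of the sample values from the question (a quick check with makeresults):

| makeresults
| eval url="aad.cs.dds.microsoft.com/"
| eval my_url=mvjoin(mvindex(split(url,"."),-2,-1),".")

which gives microsoft.com/ - the trailing slash stays because it is part of the last dot-separated segment.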
figured it out, changed "appendcols" to "append" and added this to the end   | stats count(id) AS "count" by id | where count==1   there is probably a better way, open to take other answers, than... See more...
figured it out, changed "appendcols" to "append" and added this to the end   | stats count(id) AS "count" by id | where count==1   there is probably a better way, open to take other answers, thanks! EDIT: accepted a way better solution
Try -30m@m. On its own, -30m is probably parsed as a negative number, so it is rounded to the nearest non-negative value, i.e. zero, which in epoch time is the beginning of 1970.
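If you want to see what a relative time modifier resolves to, a quick check (just makeresults plus relative_time and strftime) is:

| makeresults
| eval earliest_epoch = relative_time(now(), "-30m@m")
| eval earliest_readable = strftime(earliest_epoch, "%F %T")

which should show a time roughly 30 minutes ago, snapped back to the start of the minute.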
I'm working with data from this search:

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3)
    [ | inputlookup my_list_of_urls.csv ]
| rename url AS my_url
| stats count by my_url
| table my_url

The events look like this:

02ef65dc96524dabba54a950da7cb0d8.fp.measure.office.com/
0434c399ca884247875a286a10c969f4.fp.measure.office.com/
14f8c4d0e9b7be86933be5d3c9fb91d7.fp.measure.office.com/
3d8e055913534ff7b3c23101fd1f3ca6.fp.measure.office.com/
4334ede7832f44c5badcfd5a6459a1a2.fp.measure.office.com/
5d44dec60c9b4788fb26426c1e151f46.fp.measure.office.com/
5f021e1b8d3646398fab8ce59f8a6bbd.fp.measure.office.com/
6f6c23c1671f72c36d6179fdeabd1f56.fp.measure.office.com/
7106ea87c1e2ed0aebc9baca86f9af34.fp.measure.office.com/
88c88084fe454cbc8629332c6422e8a4.fp.measure.office.com/
982db5012df7494a88c242d426e07be6.fp.measure.office.com/
a478076af2deaf28abcbe5ceb8bdb648.fp.measure.office.com/
aad.cs.dds.microsoft.com/

In my_list_of_urls.csv there are these entries:

*.microsoft.com/
microsoft.com/
*.office.com/
office.com/

What I'm trying to do is get the microsoft.com and office.com from the results instead of the full url. I'm stumped on how to do it. Any help is appreciated.

TIA,
Joe
Hi, Why is my saved search going back to 1970? I have run the following saved search (screenshot below) and I am passing in 30 minutes (in yellow below). The saved search is set to 30 minutes. Thanks in advance, Robbie
Assuming I have this log history:

[sent] task=abc, id=123
[sent] task=abc, id=456
[success] task=abc, id=123

I would like to get a list of all ids that are "sent" but did not get a "success", so in the above example it should just be "456". My current query looks something like this:

"abc" AND "sent"
| table id
| rename id as "sent_ids"
| appendcols
    [ search "abc" AND "success"
    | table id
    | rename id as "success_ids" ]

This gets me a table with the two columns, and I'm stuck on how to "left exclusive join" the two columns to get the unique ids. Or maybe I'm approaching this entirely wrong and there is a much easier solution?