All Posts

We have multiple Docker containers, and our application writes logs inside those containers (the same log files get updated over time). We want to monitor those logs every 5 minutes using a Splunk UF that runs outside the containers. The UF will then send the data to a Splunk indexer on another server. Can you please tell me the options for doing this?
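One common option, assuming the application's log directory inside each container is bind-mounted or volume-mounted to the host, is to point a monitor input on the UF at that host path. The monitor input tails files continuously rather than polling on a 5-minute schedule, which normally covers this kind of requirement. A minimal inputs.conf sketch, with a hypothetical host path, index, and sourcetype:

[monitor:///var/log/myapp-containers/*/app.log]
disabled = 0
index = my_app_index
sourcetype = my_app_logs

If the logs cannot be exposed on the host filesystem, other approaches (logging to stdout and collecting the container runtime's log files, or a dedicated log collector) are possible, but those go beyond a plain UF monitor input.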
Hi @yuanliu, really appreciate your help and patience here. My requirements have changed, and this is my current search query:

index=abc sourcetype=example_sourcetype
| transaction startswith="Saved messages to DB" endswith="Done bulk saving messages" keepevicted=t
| eval no_msg_wait_time = mvcount(noMessageHandleCounter) * 1000
| fillnull no_msg_wait_time
| rename duration as processing_time
| eval _raw = mvindex(split(_raw, " "), -1)
| rex "Done Bulk saving .+ used (?<db_bulk_write_time>\w+)"
| eval processing_time = processing_time * 1000
| eval mq_read_time = processing_time - db_bulk_write_time - no_msg_wait_time
| where db_bulk_write_time > 0
| rename processing_time as "processing_time(ms)", db_bulk_write_time as "db_bulk_write_time(ms)", no_msg_wait_time as "no_msg_wait_time(ms)", mq_read_time as "mq_read_time(ms)"
| table _time, processing_time(ms), db_bulk_write_time(ms), no_msg_wait_time(ms), mq_read_time(ms), Count, _raw

Now, for the processing_time(ms) column, the calculation should instead start from the second "All Read threads finished flush the messages" occurrence before the "Done bulk saving messages" event (i.e., two occurrences back). So in the example below, the event at 2024-08-12 10:02:20,542 should have a processing_time measured from 10:02:19,417 to 10:02:20,542:

2024-08-12 10:02:19,417 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-12 10:02:20,526 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1  Count=1
2024-08-12 10:02:20,542 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 6 ms

How can I also create a time series graph on the same chart, where the x axis is time and the y axis shows the Count column as a bar chart plus the new processing_time(ms) as a line chart?
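One possible sketch for the combined visualization (assuming the search above already produces Count and processing_time(ms) per transaction): bucket the results by time and aggregate both series by appending something like this to the search above:

| rename "processing_time(ms)" as processing_time_ms
| bin _time span=1m
| stats sum(Count) as Count, avg(processing_time_ms) as processing_time_ms by _time

In a Simple XML column chart, processing_time_ms can then be set as a chart overlay (Format > Chart Overlay, or the charting.chart.overlayFields option), which draws it as a line over the Count columns.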
If/when you have those macros in your SPL, you can expand them and see the real SPL by pressing Ctrl+Shift+E (on Windows). Then you can run the expanded searches and see how they are actually working.
OK. As Splunk has announced end of support for Simple XML (SXML) by the end of this year, I don't believe this option will change. The same limit probably also exists on the SplunkJS side, but it may be worth the time to check!
I cannot see anything special here. Do you have UFs on other operating systems, like Windows or some Unix, and if so, do those have the same issue? Can you also post the relevant inputs.conf output from btool on your indexer?
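For reference, the btool output can be collected with something like this (a sketch, assuming a default $SPLUNK_HOME):

$SPLUNK_HOME/bin/splunk btool inputs list --debug

The --debug flag shows which .conf file each setting comes from, which makes precedence problems easier to spot.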
Hi @isoutamo, I've tried setting the option to more than 100, but then I immediately get an error in the dashboard and can't save it. The problem is not only with the export to PDF; in the dashboard itself there is also a "next" button for pagination. So I don't think the "betterpdf" app can solve this.
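(For context, the option in question is presumably the Simple XML table row-count setting, along the lines of the sketch below; it is values above 100 that trigger the error described.)

<option name="count">100</option>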
First of all, you need to realize that () in SPL has nothing to do with "macro". Like in most languages, it is just syntax to isolate terms. On its own, it does nothing. You will have to illustrate the context where you see a behavior difference. Let me first show you two examples:

index=_internal earliest=-2h@h latest=-1h@h

and

index=_internal earliest=-2h@h latest=-1h@h ()

They give me the exact same result.
I don't know how to extract the last sentence, but the last line is easy.

| eval lastline = mvindex(split(Message, " "), -1)

Here is a data emulation you can play with and compare with real data:

| makeresults
| fields - _*
| eval Message = mvappend(
    "Starting logs( most recent logs) : D://example ......a bunch of sensative information D://example /local/line499 D://example ......a bunch of sensative information D://example /crab/lin650 D://example ......a bunch of sensative information D://user/local/line500",
    "Starting logs( most recent logs) : D://example ......a bunch of sensative information D://example ......a bunch of sensative information D://example ......a bunch of sensative information D://example ......a bunch of sensative information D://example ......a bunch of sensative information Error : someone stepped on the wire",
    "Starting logs( most recent logs) : D://example ......a bunch of sensative information D://example ......a bunch of sensative information D://example ......a bunch of sensative information D://example ......a bunch of sensative information D://example ......a bunch of sensative information D://user/local/line980 ,indo",
    "Starting logs( most recent logs) : D://example ......a bunch of sensative information D://example ......a bunch of sensative information D://example ......a bunch of sensative information D://example ......a bunch of sensative information D://example ......a bunch of sensative information Error : Simon said Look")
| mvexpand Message
``` the above emulates index=example "House*" ```

The output using this emulation is (the Message column simply repeats each emulated event above; the extracted lastline values are):

lastline
D://user/local/line500
Error : someone stepped on the wire
D://user/local/line980 ,indo
Error : Simon said Look
Thank you for your quick response. I am literally asking what searching with exactly a pair of parentheses with nothing inside, "()", does, because many Security Content searches include an empty macro so users can add whitelist/exceptions to their search, and by default these macros are empty. At first I thought they would do nothing, but when I ran a search with one such empty macro, it actually returned results. I am concerned that these empty macros will mess up my searches.
This might be easier:

| eval modified_description = mvjoin(split(Description, "."), ".0")

Here is an emulation of your mock data:

| makeresults format=csv data="Description
Aisle 1014
Aisle 1015
1102.1.1
1102.1.2"
``` the above emulates | inputlookup dsa.csv ```

With this, the output is:

Description   modified_description
Aisle 1014    Aisle 1014
Aisle 1015    Aisle 1015
1102.1.1      1102.01.01
1102.1.2      1102.01.02
Maybe you can give more context? Where are you using any of these? If you cannot illustrate the real search command, at least post some mock code, or use index=_internal or something to demonstrate how the two are different. What is an "empty macro", anyway?
Just as you say, Splunk is not SQL. So, forget join. Please let us know the nature of the two searches: how close are they? What are their search periods? Most of the time you shouldn't run two separate searches; instead, combine the two into one search, then try to get the result you need from that single search.

You wrote:

    Criteria being if there are duplicate values in fieldA, only the row with the latest value is kept and each row with fieldB joined to fieldA on same ID. or if there are no values for fieldA, just join with null/blank value Ideally, we can also throw away all rows with col fieldB that have a timestamp earlier than fieldA but not a hard requirement if that adds too much complexity to the query

Here you talk about "latest" and "earlier", but your mock data illustration contains no time information. How are volunteers supposed to help?

Now, if you MUST run the two searches separately, yes, there are ways to produce right-join output in SPL without using the join command, which most Splunkers advise against. But let's start at the ABCs of asking answerable questions in a data analytics forum. (That's right, this is not a SQL forum.) Here are four golden rules that I call the Four Commandments:

1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.

Start from here.
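To make the "combine into one search" idea concrete, here is a minimal sketch (hypothetical index and sourcetype names; both event types are assumed to share an ID field):

(index=my_index sourcetype=typeA) OR (index=my_index sourcetype=typeB)
| stats latest(fieldA) as fieldA, values(fieldB) as fieldB by ID

This gives one row per ID with a multivalue fieldB, which would still need reshaping (mvexpand, for example) to match the exact row-per-pair output asked about in the original question.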
Is there any difference between an empty macro containing

()

or

""

I see that searches with both return results, but they do not behave the same as index=*. So what do these empty macros actually do? Any clues on what logs to check, or where I can drill down further on this?
It doesn't work that way.  Splunk does not notify the user when their account is created.  It's up to the admin (you) to do that.
The simple calculation is daily ingestion × number of days of retention × compression ratio (~15%), but you must also include multipliers such as replication and data model acceleration.
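To illustrate with made-up numbers (purely a sketch, not your actual figures):

100 GB/day raw x 30 days retention x 0.15 ≈ 450 GB of compressed raw data
450 GB x 2 (replication factor of 2) ≈ 900 GB across the indexer cluster

Index (tsidx) files, data model acceleration summaries, and headroom for growth all come on top of that.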
@bowesmana Actually, there is a lookup from which I want to extract this kind of pattern. Yesterday I did a lot of trial and error, and finally the search below is working as expected.

| inputlookup dsa.csv
| eval parts = split(Description, ".")
| eval part1 = mvindex(parts, 0)
| eval part2 = mvindex(parts, 1)
| eval part3 = mvindex(parts, 2)
| eval modified_part2 = if(len(part2) == 1, "0" . part2, part2)
| eval modified_part3 = if(len(part3) == 1, "0" . part3, part3)
| eval modified_description = part1 . "." . modified_part2 . "." . modified_part3
| table Description, modified_description
This issue just happened to me this morning; it was after I performed the data mapping. I was able to fix it without clearing my bookmarks etc. by going to setup -> review app configuration -> update content -> force update.
Hi, I have a single search that produces the following table, where fieldA and fieldB are arbitrary strings that may be duplicated. This is an exact representation of each event: each event may have a key "fieldA" or a key "fieldB", but not both, and every event has an ID and a Timestamp.

Timestamp  ID  fieldA  fieldB
11115      1           "z"
11245      1   "a"
11378      1   "b"
11768      1           "d"
11879      1           "d"
12550      2   "c"
13580      2           "e"
15703      2           "f"
18690      3           "g"

and I need help to transform the data as follows:

ID  fieldA  fieldB
1   "b"     "d"
1   "b"     "d"
2   "c"     "e"
2   "c"     "f"
3           "g"

Thanks to the suggestion below, I have tried stats latest(fieldA) list(fieldB), but I would prefer not to have any multivalued fields. For every distinct value of "fieldA", the latest record with that value is kept, and any records with that ID occurring before that record are discarded. There is no requirement to run two searches. Hope that makes it clearer and easier.
You're probably going to need streamstats. Here's an example that generates 5 printers with randomised printing, error and spooling statuses; it then uses streamstats to find each occurrence of printer_error and counts the occurrences of spooling after the error. It handles multiple occurrences of error followed by spooling.

| makeresults count=1000
| streamstats c
| eval _time=now() - (c * 60)
| sort _time
| eval printer="Printer ".(random() % 5), r=random() % 100, status=case(r<3, "printing,error", r<90, "printing", r<100, "spooling")
| fields - r c
| search status IN ("printing,error","spooling")
``` Up to the above is just creating dummy data then removing all the printing events so just error and spooling are left ```
``` Create an occurrence group for each failure ```
| streamstats count(eval(status="printing,error")) as occurrence by printer
``` Ignore the first as it's not relevant here ```
| where occurrence>0
``` Now count spooling events by failure occurrence and save start/end times ```
| stats min(_time) as printer_error max(_time) as last_spooling count(eval(status="spooling")) as spooling by occurrence printer
| fieldformat last_spooling=strftime(last_spooling, "%F %T")
| fieldformat printer_error=strftime(printer_error, "%F %T")
| sort printer printer_error

Hopefully this will give you something to start with.