All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi there, Splunk Community! First time poster! Whoo! Let me briefly outline the situation, the goal, and the problem: I have a field in a dataset called `detail.accountId` that holds an AWS Account ID number. My goal is to create a calculated field called "AccountName" for each `detail.accountId` that would theoretically look something like this:

if(detail.accountId == "1234567890", "AccountX", "UnknownAccountName")

The problem I'm facing is that the eval expression always comes out false, causing the AccountName column to always display "UnknownAccountName". No matter whether I use tostring(detail.accountId), trim(detail.accountId), match(detail.accountId), etc. in the eval comparison, it is always false, even though the value "1234567890" definitely exists as the detail.accountId. Am I doing something incorrectly here that may be obvious to someone?

Thank you very much for the help! Tyler
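A hedged side note on syntax that may be relevant to posts like this one: in eval expressions, a field name containing a dot must be wrapped in single quotes, or eval will not resolve it as a single field. A minimal sketch of the calculated field with the name quoted (values taken from the post above):

```
| eval AccountName = if('detail.accountId' == "1234567890", "AccountX", "UnknownAccountName")
```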
Hello, I want to integrate Cloudflare with our Splunk Enterprise via Cloudflare's Logpull method. With this method, I will pull the logs from Cloudflare via its REST API every hour.

Can someone please help me with how to do that? Is there an add-on or app that I can use for calling the REST API, or is there another method I could use?
If you want any sort of stat based on time, you should include it in the by clause. Try starting with something like this:

|tstats dc(host) as distinct_host where index=okta sourcetype="OktaIM2:log" by _time
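If fixed-size (for example hourly) buckets are wanted, tstats also accepts a span directly on _time in the by clause; a sketch using the same index and sourcetype as above:

```
| tstats dc(host) as distinct_host where index=okta sourcetype="OktaIM2:log" by _time span=1h
```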
Given that the statement you shared is not valid SPL, perhaps it would be more useful if you shared what you are actually doing (anonymised only where necessary) so we might be able to determine what might be wrong.
Please share some representative anonymised sample events in a code block.
How often do you want to sample the CPU used?
Are Env and Tenant already extracted?
Do you want the stat broken down by Env and Tenant as well as time, or some other dimensions?
Hello, I want to use an external file that contains 2 columns, C and D, and apply those mappings to an existing query that displays a table with the value C (so it is like a case statement that gives a D value for each value of C by checking an external CSV file). What is wrong in this syntax?

index="...*" "... events{}.name=ResourceCreated
| bin _time span=1h
| spath "events{}.tags.A"
| dedup "events{}.tags.A"
| inputcsv append=t Map.csv
| stats D as D by C
| table "events{}.tags.A" "events{}.tags.B" "events{}.tags.C" "events{}.tags.D" _time
| collect index=_xyz_summary marker="search_name=\"test_new_query_4cols\""

I get this error:

Error in 'stats' command: The number of wildcards between field specifier '*' and rename specifier 'D' do not match. Note: empty field specifiers implies all fields, e.g. sum() == sum(*).

I tried a switch/case, but it always shows the default value. Thanks
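As a hedged aside: the usual way to pull a D value for each C out of a CSV is the lookup command, which appends matching columns row by row instead of mixing the CSV into the event pipeline. A minimal sketch, assuming Map.csv is available as a lookup table file with columns C and D, and that a field named C already exists on each result row:

```
... | lookup Map.csv C OUTPUT D | table C D _time
```

Rows whose C value has no match in the file are left with D empty.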
Hi, I have a requirement to create a single value visual with a trendline. I have looked at sample queries on the Dashboard Studio examples hub. Below is my base query:

|tstats dc(host) as distinct_count where index=okta sourcetype="OktaIM2:log"

Expected result: a single value with a trendline, as described above.

I have been trying the below 2 searches, but neither of the two shows the expected result:

|tstats dc(host) as distinct_host where index=okta sourcetype="OktaIM2:log" | chart count(distinct_host) by _time

OR

|tstats dc(host) as distinct_host where index=okta sourcetype="OktaIM2:log" | timechart count(distinct_host) by _time

If I try the below query without tstats, it works, but I need to use tstats from a performance point of view:

index=okta sourcetype="OktaIM2:log" | chart dc(host) by _time span=1h

Any suggestion how to generate a single value trendline with tstats?
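A hedged sketch of one way to get a per-bucket series out of tstats for a single value with trendline: let tstats group by _time with a span, then optionally run the result through timechart so empty buckets are filled. The 1h span is an arbitrary choice:

```
| tstats dc(host) as distinct_host where index=okta sourcetype="OktaIM2:log" by _time span=1h
| timechart span=1h max(distinct_host) as distinct_host
```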
We have multiple Docker containers, and there are some logs (created by our application; the same log file gets updated) inside those containers. We want to monitor those logs every 5 minutes using a Splunk UF that runs outside the containers. The Splunk UF will send the data to a Splunk indexer on another server. Can you please tell me the options for doing this?
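One hedged approach, assuming the application logs are written to a Docker volume (or bind mount) that is visible on the host filesystem where the UF runs: monitor the host-side path. A sketch of an inputs.conf stanza; the path, index, and sourcetype are all illustrative assumptions:

```
# inputs.conf on the Universal Forwarder; all names below are assumptions
[monitor:///var/lib/docker/volumes/app_logs/_data/*.log]
index = app_logs
sourcetype = myapp:log
disabled = false
```

Note that a monitor input tails files continuously rather than polling on a 5-minute schedule; if a strict 5-minute cadence matters, a scripted input would be the alternative to look into.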
Hi @yuanliu, really appreciate your help and patience here. My requirements have changed, and this is my current search query:

index=abc sourcetype = example_sourcetype
| transaction startswith="Saved messages to DB" endswith="Done bulk saving messages" keepevicted=t
| eval no_msg_wait_time = mvcount(noMessageHandleCounter) * 1000
| fillnull no_msg_wait_time
| rename duration as processing_time
| eval _raw = mvindex(split(_raw, " "), -1)
| rex "Done Bulk saving .+ used (?<db_bulk_write_time>\w+)"
| eval processing_time = processing_time * 1000
| eval mq_read_time = processing_time - db_bulk_write_time - no_msg_wait_time
| where db_bulk_write_time > 0
| rename processing_time as "processing_time(ms)", db_bulk_write_time as "db_bulk_write_time(ms)", no_msg_wait_time as "no_msg_wait_time(ms)", mq_read_time as "mq_read_time(ms)"
| table _time, processing_time(ms), db_bulk_write_time(ms), no_msg_wait_time(ms), mq_read_time(ms), Count, _raw

Now, for the processing_time(ms) column, the calculation should instead start from the second-previous occurrence of "All Read threads finished flush the messages" and end at "Done bulk saving messages". So in the example below, the event at 2024-08-12 10:02:20,542 will have a processing_time from 10:02:19,417 to 10:02:20,542:

2024-08-12 10:02:19,417 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-12 10:02:20,526 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1  Count=1
2024-08-12 10:02:20,542 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 6 ms

How can I also create a time series graph on the same chart, where the x axis is time and the y axis is a bar chart of the Count column plus a line chart of the new processing_time(ms)?
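One hedged way to anchor the start time at the second-previous "All Read threads finished" event is streamstats with a sliding window instead of transaction. A sketch, assuming the search is first narrowed to just those two message types and sorted oldest-first; with window=3 (the current event plus the two before it), earliest(_time) is the timestamp two events back:

```
index=abc sourcetype=example_sourcetype ("All Read threads finished" OR "Done Bulk saving")
| sort 0 _time
| streamstats current=t window=3 earliest(_time) as window_start
| eval processing_time_ms = round((_time - window_start) * 1000)
| search "Done Bulk saving"
```

For the combined visual, a result carrying one time column plus Count and processing_time(ms) can be rendered as a column chart with a chart overlay applied to the second series in the visualization settings.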
If/when you have those macros in your SPL, you can expand them and see the real SPL by pressing Ctrl+Shift+E on Windows. Then you can run the expanded searches and see how they are working.
Ok. As Splunk has announced end of support for Simple XML (SXML) by the end of this year, I don't believe that there will be any change to this option.

That same limit is probably also on the SplunkJS side, but maybe it's worth the time to check!
I cannot see anything special here. Do you have UFs on other OSes, like Windows or some Unix, and if so, do those have the same issue? Can you post your indexer's relevant inputs.conf output from btool too?
Hi @isoutamo, I've tried an option of more than 100, but then I get an error directly in the dashboard and I can't save it. The problem is not only with the export to PDF; in the dashboard itself there is also a "next" button. So I think the "betterpdf" app can't solve this.
First of all, you need to realize that () in SPL has nothing to do with "macro". Like in most languages, it is just syntax to isolate terms. On their own, the parentheses do nothing. You will have to illustrate the context where you see a behavior difference. Let me first show you two examples:

index = _internal earliest=-2h@h latest=-1h@h

and

index = _internal earliest=-2h@h latest=-1h@h ()

They give me the exact same result.
I don't know how to extract the last sentence, but the last line is easy (splitting on the newline character):

| eval lastline = mvindex(split(Message, "
"), -1)

Here is a data emulation you can play with and compare with real data:

| makeresults
| fields - _*
| eval Message = mvappend("Starting logs( most recent logs) :
D://example ......a bunch of sensitive information
D://example /local/line499
D://example ......a bunch of sensitive information
D://example /crab/lin650
D://example ......a bunch of sensitive information
D://user/local/line500", "Starting logs( most recent logs) :
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
Error : someone stepped on the wire", "Starting logs( most recent logs) :
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://user/local/line980 ,indo", "Starting logs( most recent logs) :
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
Error : Simon said Look")
| mvexpand Message
``` the above emulates index=example "House*" ```

Output using this emulation, showing the lastline value for each of the four Message values in turn:

D://user/local/line500
Error : someone stepped on the wire
D://user/local/line980 ,indo
Error : Simon said Look
Thank you for your quick response. I am literally asking what searching with exactly a pair of parentheses with nothing inside, "()", does, because many Security Content searches include an empty macro for users to add whitelists/exceptions to their search, and by default these macros are empty. At first I thought they would do nothing, but when I ran one such search with an empty macro, it actually returned results. I am concerned that these empty macros will mess up my searches.
This might be easier:

| eval modified_description = mvjoin(split(Description, "."), ".0")

Here is an emulation of your mock data:

| makeresults format=csv data="Description
Aisle 1014
Aisle 1015
1102.1.1
1102.1.2"
``` the above emulates | inputlookup dsa.csv ```

With this, the output is:

Description -> modified_description
Aisle 1014 -> Aisle 1014
Aisle 1015 -> Aisle 1015
1102.1.1 -> 1102.01.01
1102.1.2 -> 1102.01.02
Maybe you can give more context? Where are you using any of these? If you cannot illustrate the real search command, at least post some mock code, or use index=_internal or something, to demonstrate that the two are different. What is an "empty macro", anyway?
Just as you say, Splunk is not SQL. So, forget join. Please let us know the nature of the two searches: how close are they? What are their search periods? Most of the time, you shouldn't run two separate searches; instead, combine the two into one search, then try to get the result you need from that one search.

You said:

"Criteria being if there are duplicate values in fieldA, only the row with the latest value is kept, and each row with fieldB joined to fieldA on the same ID; or if there are no values for fieldA, just join with a null/blank value. Ideally, we can also throw away all rows with col fieldB that have a timestamp earlier than fieldA, but that's not a hard requirement if it adds too much complexity to the query."

Here, you talk about "latest" and "earlier". But your mock data illustration contains no time information. How are volunteers supposed to help?

Now, if you MUST run the two searches separately: yes, there are ways to produce right-join output in SPL without using the join command, which most Splunkers advise against. But let's start at the ABCs of asking answerable questions in a data analytics forum. (That's right, this is not a SQL forum.) Here are four golden rules that I call the Four Commandments:

1. Illustrate data input (in raw text, anonymised as needed), whether it is raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.

Start from here.
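To make the "combine the two into one search" advice concrete, here is a hedged sketch of the common pattern; every index, term, and field name below is a hypothetical placeholder:

```
(index=idx1 <terms of first search>) OR (index=idx2 <terms of second search>)
| stats latest(fieldA) as fieldA values(fieldB) as fieldB by ID
```

A single stats pass grouped by the shared ID then takes the place of the SQL-style join.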
Is there any difference between an empty macro containing

()

or

""

? I see searches with both; both return results, but they do not behave the same as index=*. So what do these empty macros actually do? Any clues as to what logs to check, or where I can drill down further?