All Posts

Hi all, I am using the search below to monitor the status of a process based on its PID and usage. As a test we stopped the service and the PID changed; how can we determine when it stopped? When using the search below I am not getting the old PID in the table, which only shows the latest. How can I modify it?

index=Test1 host="testserver" (source=ps COMMAND=*cybAgent*)
| stats latest(cpu_load_percent) as "CPU %", latest(PercentMemory) as "MEM %", latest(RSZ_KB) as "Resident Memory (KB)", latest(VSZ_KB) as "Virtual Memory (KB)", latest(PID) as "PID", latest(host) as "host" by COMMAND
| eval Process_Status = case(isnotnull('CPU %') AND isnotnull('MEM %'), "Running", isnull('CPU %') AND isnull('MEM %'), "Not Running", 1=1, "Unknown")
| table host, "CPU %", "MEM %", "Resident Memory (KB)", "Virtual Memory (KB)", Process_Status, COMMAND, PID
| eval Process_Status = coalesce(Process_Status, "Unknown")
| fillnull value="N/A"
Hi @adent,
Only one question: if you have events on more than one day, an event at 23:59 on the previous day is earlier than an event at 01:30 on the second day, but calculating min on the time-of-day value gives something different. Do you want to calculate the min only on the time of day, or the earliest timestamp?

If min on the time of day, you could run something like this:

<your_search>
| eval Time=strftime(_time,"%H.%M")
| stats min(Time) AS Time BY event
| append
    [ search <your_search>
    | eval Time=strftime(_time,"%H.%M")
    | stats min(Time) AS Time ]

If the earliest timestamp, you could try something like this:

<your_search>
| stats earliest(_time) AS Time BY event
| append
    [ search <your_search>
    | stats earliest(_time) AS _time
    | eval _time=strftime(_time,"%Y-%m-%d %H.%M") ]

I could be more detailed if you share a sample of your logs and (if you already have one) the search you're using.
Ciao.
Giuseppe
Hi, here is one old answer which describes how joins can and should be done with Splunk: https://community.splunk.com/t5/Splunk-Search/What-is-the-relation-between-the-Splunk-inner-left-join-and-the/m-p/391288/thread-id/113948
r. Ismo
I've had the exact same use case and found a workaround. Adding this just in case anyone else stumbles across it. Update the default option to noop, so the token reverts to it when the checkbox is deselected:

<input type="checkbox" token="dedupresults">
  <choice value="dedup src,dest">Dedup</choice>
  <default>noop</default>
</input>

And insert the token into your search:

.... | $dedupresults$ | .....

This will result in the search being either

... | dedup src,dest | .....

or

... | noop | .....
Using mvrange with time!  I think you also gave me this a long time ago for a different question, but with a unit instead of directly with _time. (mvexpand with info_max_time - info_min_time is too much.) Combining that lesson (thanks again!) and this formula, and working out some Splunk kinks, I can make it work with a simple count. To start, I also realize that addinfo in makeresults will not work the same way as in a search command, so I modified my simulation strategy a little.  This will be my new baseline:

index = _internal
| where _time < relative_time(now(), "-2h@h") ``` simulate zero-count buckets ```
| timechart span=1h count

The complete workaround will be

index = _internal
| where _time < relative_time(now(), "-2h@h") ``` simulate zero-count buckets ```
| bucket _time span=1h@h
| chart count over _time
| append
    [| makeresults
    | addinfo
    | eval hours = mvrange(0, round((info_max_time - info_min_time) / 3600))
    | eval time = mvmap(hours, info_min_time + hours * 3600)
    | table time
    | mvexpand time
    | rename time as _time
    | bucket _time span=1h@h
    | eval count=0]
| stats sum(count) as count by _time

Then, I should have noted in the OP that my chart has a groupby clause.
So, I move my baseline to

index = _internal sourcetype IN (splunkd, splunkd_access, splunkd_ui_access)
| where _time < relative_time(now(), "-2h@h") ``` simulate zero-count buckets ```
| timechart span=1h count by sourcetype

The workaround with groupby therefore is

index = _internal sourcetype IN (splunkd, splunkd_access, splunkd_ui_access)
``` simulate zero-count buckets ```
| where _time < relative_time(now(), "-2h@h")
| bucket _time span=1h@h
| chart count over _time by sourcetype
| append
    [| makeresults
    | addinfo
    | eval hours = mvrange(0, round((info_max_time - info_min_time) / 3600))
    | eval time = mvmap(hours, info_min_time + hours * 3600)
    | table time
    | mvexpand time
    | rename time as _time
    | bucket _time span=1h@h
    | foreach splunkd, splunkd_access, splunkd_ui_access
        [eval <<FIELD>> = 0]]
| chart sum(*) as * by _time

This is super messy; it can be daunting if there are many values in the groupby, or if the values are unpredictable.  As you said, I should try to stick to timechart when dealing with time series.
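Outside Splunk, the mvrange/mvmap step above just generates one epoch per hour between the window bounds. A minimal Python sketch of the same arithmetic (the window bounds are made-up values standing in for addinfo's info_min_time/info_max_time):

```python
# Hypothetical search window bounds (epoch seconds), standing in for
# info_min_time / info_max_time from addinfo.
info_min_time = 1_700_000_000.0
info_max_time = info_min_time + 3 * 3600  # a three-hour window

# mvrange(0, round((max - min) / 3600)) -> hour indices 0, 1, 2
hours = range(0, round((info_max_time - info_min_time) / 3600))

# mvmap(hours, info_min_time + hours * 3600) -> one epoch per hour bucket
times = [info_min_time + h * 3600 for h in hours]

print(times)
```

The appended rows then carry count=0, so the final stats sum(count) fills any bucket the real events missed.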
It is interesting that you pose the question in regard to SPL instead of as a data structure consideration.  Still, while an array is a viable structure for many applications, SPL is not the only language that has to go to extra lengths to handle one.  If you have a choice, and if you don't care much about front-end compute, a hash is easier on SPL. (And again, easier in some use cases with other languages.) I do want to suggest, though, that you drop the nested listX.id node because it is redundant. (Lastly, I also recommend that you illustrate with compliant JSON.  This makes volunteers' work easier.)

{
  "host": "test",
  "list1": {
    "ip": "192.168.0.1",
    "device": "laptop",
    "value": 123
  },
  "list2": {
    "ip": "192.168.0.2",
    "device": "phone",
    "value": 1223
  },
  "list3": {
    "ip": "192.168.0.3",
    "device": "desktop",
    "value": 99
  }
}
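The array-vs-hash trade-off is easy to see outside Splunk too. A small Python sketch (field names taken from the question) of why the hash shape allows direct lookup while the array shape needs a scan:

```python
# Array shape ("way 1"): finding an entry by id requires scanning the list.
event_array = {"host": "test", "lists": [
    {"id": "list1", "ip": "192.168.0.1", "device": "laptop", "value": 123},
    {"id": "list2", "ip": "192.168.0.2", "device": "phone", "value": 1223},
]}
entry = next(e for e in event_array["lists"] if e["id"] == "list2")

# Hash shape ("way 2", without the redundant id node): direct key lookup.
event_hash = {"host": "test",
              "list1": {"ip": "192.168.0.1", "device": "laptop", "value": 123},
              "list2": {"ip": "192.168.0.2", "device": "phone", "value": 1223}}

print(entry["value"], event_hash["list2"]["value"])
```

The same asymmetry shows up in SPL: hash keys become field names you can address directly, while array elements usually need mvexpand or spath with an index.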
When I made a log for HEC with a JSON array, I'm not sure which is the better way to use SPL on it. Can someone advise me please?

Way 1:

{host: 'test',
 lists: [{
    id: 'list1',
    ip: '192.168.0.1',
    device: 'laptop',
    value: 123
  },
  {
    id: 'list2',
    ip: '192.168.0.2',
    device: 'phone',
    value: 1223
  },
  {
    id: 'list3',
    ip: '192.168.0.3',
    device: 'desktop',
    value: 99
  }
]}

Way 2:

{host: 'test',
 list1: {
    id: 'list1',
    ip: '192.168.0.1',
    device: 'laptop',
    value: 123
  },
 list2: {
    id: 'list2',
    ip: '192.168.0.2',
    device: 'phone',
    value: 1223
  },
 list3: {
    id: 'list3',
    ip: '192.168.0.3',
    device: 'desktop',
    value: 99
  }
}
The _raw JSON format is below:

{
  "test-03": { "field1": 97869, "field2": 179771, "field3": "test-03", "traffics": 1070140210 },
  "test-08": { "field1": 53094, "field2": 103840, "field3": "test-08", "traffics": 998807234 },
  "test-09": { "field1": 145655, "field2": 250518, "field3": "test-09", "traffics": 2212423288 },
  "test-10": { "field1": 83663, "field2": 151029, "field3": "test-10", "traffics": 762554139 },
  "k": 63314
}

When I use timechart avg(test*.traffics), it works, but the numbers were so huge that I tried |eval test*.traffics=round(test*.traffics/1024,2), and it didn't work. Can anybody help, please?
Something like this should work. I called the lookup db_names.csv; change that to whatever your actual lookup is named. Everything above the comment just emulates the data you gave.

| makeresults count=1
| eval _raw="DatabaseName,Instance,CPUUtilization
A,A1,10
A,A2,20
C,C1,40
C,C2,50
D,D,60"
| multikv forceheader=1
| fields - _time, _raw, linecount
```^^^^ This emulates the data you gave ^^^^```
| eval inst_cpu=Instance+"#"+CPUUtilization
| fields - Instance CPUUtilization
| inputlookup db_names.csv append=true ```<-- change the lookup name here```
| stats list(inst_cpu) as inst_cpu by DatabaseName
| mvexpand inst_cpu
| eval Instance=mvindex(split(inst_cpu,"#"), 0)
| eval CPUUtilization=mvindex(split(inst_cpu,"#"), 1)
| fillnull value="NULL" Instance CPUUtilization
| fields - inst_cpu
You could do something like this. Imagine a lookup that looks like this.

ip_lookup.csv
ip
10.10.53.22
127.0.0.1
192.168.0.54

index=myindex ([| inputlookup ip_lookup.csv | stats values(eval("src_ip=\""+ip+"\"")) as search | eval search=mvjoin(search, " OR ")])

This produces ...

index=myindex (src_ip="10.10.53.22" OR src_ip="127.0.0.1" OR src_ip="192.168.0.54")

... or ...

index=myindex ([| inputlookup ip_lookup.csv | stats values(eval("src_ip!=\""+ip+"\"")) as search | eval search=mvjoin(search, " AND ")])

This produces ...

index=myindex (src_ip!="10.10.53.22" AND src_ip!="127.0.0.1" AND src_ip!="192.168.0.54")

You could even put the sub-search into a macro that references the lookup to make it easier to reuse. An example would be like this.

Macro name: my_ip_macro
Macro definition:

[| inputlookup ip_lookup.csv | stats values(eval("src_ip=\""+ip+"\"")) as search | eval search=mvjoin(search, " OR ")]

Search using the macro:

index=myindex `my_ip_macro`
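For clarity, the clause that the first sub-search expands to can be reproduced with plain string joining. A Python sketch using the sample lookup values from above:

```python
# Sample values from the ip_lookup.csv lookup above.
ips = ["10.10.53.22", "127.0.0.1", "192.168.0.54"]

# Equivalent of stats values(eval(...)) followed by mvjoin(search, " OR "):
# wrap each value in a field=value term, then join with the boolean operator.
include = "(" + " OR ".join(f'src_ip="{ip}"' for ip in ips) + ")"
exclude = "(" + " AND ".join(f'src_ip!="{ip}"' for ip in ips) + ")"

print(include)
print(exclude)
```

This is exactly why the include case uses OR (match any listed IP) while the exclude case uses AND (reject every listed IP).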
Hi all, I have a panel with 4 columns and I configure the panel settings in "htmlPanel1A".

<panel id="htmlPanel1A">

Due to different values in each column, I sometimes find 3 columns look left-aligned while the remaining column looks right-aligned. I think the problem comes from the center alignment in the default setting. I would like to change all columns to right-aligned but keep the title of the panel center-aligned. Is there any suggestion on my CSS configuration to fulfill this purpose?

<panel depends="$alwaysHideCSS$">
  <html>
    <div>
      <style>
        /* define some default colors */
        .dashboard-row .dashboard-panel{
          background-color:lightcyan !important;
        }
        .dashboard-panel h2{
          background:cyan !important;
          color:#FFFFFF !important;
          text-align: center !important;
          font-weight: bold !important;
          border-top-right-radius: 12px;
          border-top-left-radius: 12px;
        }
        /* override default colors by panel id */
        #htmlPanel1 h2,#htmlPanel1A h2{
          color:#3C444D !important;
          background-color:#FFFFFF !important;
        }
        .....
      </style>
    </div>
  </html>
</panel>

Thank you so much.
I am trying to get individual values and add a summary row with the minimum value. In this case I have 3 times and want the output to have all three times plus a minimum-time row (labelname=min).

event    _time
a        10:00
b        11:00
c        10:30
min      10:00
What do you mean by pulling the _raw? Do you mean "pulling" as in removing _raw from the fields list? Are you using the collect command to add the events into another index? If you do and don't explicitly set a sourcetype then you will not incur a licensing hit for the data copied to the other index.
This should work.

| makeresults count=1
| eval _raw="System,_time,PP_elapsed_Time,CC_elapsed_Time
Member,2023-09-10,1.52,4
Member,2023-09-11,2,2.6"
| multikv forceheader=1
| fields - _time, _raw, linecount
| rename time as _time
| table System _time PP_elapsed_Time CC_elapsed_Time
```^^^^ Above is just creating example data ^^^^```
| eval SysTime = System + ":" + _time
| fields - System, _time
| untable SysTime Reason Value
| eval System = mvindex(split(SysTime,":"), 0)
| eval _time = mvindex(split(SysTime,":"), 1)
| fields - SysTime
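The untable step is a wide-to-long reshape: each measure column becomes a (Reason, Value) pair per row. A minimal Python sketch of the same transformation on the sample rows (column names taken from the question):

```python
# Wide rows, as in the question's current output.
rows = [
    {"System": "Member", "_time": "2023-09-10", "PP_elapsed_Time": 1.52, "CC_elapsed_Time": 4.0},
    {"System": "Member", "_time": "2023-09-11", "PP_elapsed_Time": 2.0, "CC_elapsed_Time": 2.6},
]

# untable keeps a single id column, which is why the SPL above first glues
# System and _time together; here we can simply carry both fields along.
long_rows = [
    {"System": r["System"], "_time": r["_time"], "Reason": k, "Value": r[k]}
    for r in rows
    for k in ("PP_elapsed_Time", "CC_elapsed_Time")
]

for lr in long_rows:
    print(lr)
```

Each wide row yields one long row per measure column, matching the desired four-row output.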
I have a very long SQL query executing every 900 seconds, and the number of events is in the millions. There are ~10 left joins in the SQL query, and filtering events on fields to get the output seems to create load on the server and on the DB Connect server. I wanted to use the "Catalog", "Schema" and "Table" options in a DB Connect input where I can choose or add multiple left joins. Do we have any documentation? I didn't find any in Splunk Docs or in the community. Any explanation or documentation on how to use a left join via the schema in a DB Connect input would be much appreciated. Thanks in advance!
Hello, I'm working in Splunk Enterprise 8.2.4. I have the below search:

index=Red msg="*COMPLETED Task*"
| spath output=logMessage path=msg
| rex field=logMessage "Message\|[^\t\{]*(?<json>{[^\t]+})"
| eval PP_elapsedTime=spath(json, "PPInfo.PP.elapsedTime")
| eval CC_elapsedTime=spath(json, "CCInfo.CC.elapsedTime")
| eval System = "Member"
| table System, PP_elapsedTime, CC_elapsedTime

Current output:

System   _time       PP_elapsed_Time  CC_elapsed_Time
Member   2023-09-10  1.52             4
Member   2023-09-11  2                2.6

I want the output to read:

System   _time       Reason           Value
Member   2023-09-10  PP_elapsed_Time  1.52
Member   2023-09-10  CC_elapsed_Time  4
Member   2023-09-11  PP_elapsed_Time  2
Member   2023-09-11  CC_elapsed_Time  2.6

I'm not sure where to go from here; any feedback would be appreciated.
Give this a try index=_internal source=*var/log/splunk/search_messages.log
I'm not sure why my original reply isn't showing up...but it is now located here in a totally different place but under a copy of this post:   Re: Developing reliable searches dealing with even... - Splunk Community
Here are a couple posts that cover this concept: Solved: Search for items not matching values from a lookup - Splunk Community Solved: Compare search results with a lookup table and ide... - Splunk Community  
The timechart command generates a time series for the selected time range, so you get data for the full time window, even when there are no results for certain buckets. The chart command, like the stats command, generates statistics for the available _time buckets only, so if a time bucket has 0 events, it will not show it (it can't generate what's not present). There are workarounds to get a full time series with chart as well, but they're not that pretty. If timechart is an option, use that.

Here is the workaround query:

index = _internal ``` simulate zero-count buckets ```
| bucket _time span=5m
| chart count over _time
| append
    [| makeresults
    | addinfo
    | eval time=mvrange(info_min_time, info_max_time+1, 300)
    | rename comment as "the third argument should be in seconds and the same as the span you selected for chart"
    | table time
    | mvexpand time
    | rename time as _time
    | eval count=0]
| chart sum(count) as count by _time
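The mvrange call above simply enumerates bucket start times at the chosen span. A Python sketch of the same enumeration, with made-up window bounds standing in for addinfo's info_min_time/info_max_time:

```python
# Hypothetical window bounds (epoch seconds); span of 300 s matches the
# 5-minute buckets used in the chart above.
info_min_time, info_max_time, span = 1700000000, 1700001500, 300

# mvrange(info_min_time, info_max_time + 1, 300): one value per bucket start.
# The +1 on the upper bound makes the final bucket start inclusive.
bucket_starts = list(range(info_min_time, info_max_time + 1, span))

print(bucket_starts)
```

Appending these zero-count rows before the final chart sum(count) is what fills in the buckets that had no real events.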