All Posts

When I create a log to send to HEC with a JSON array, I'm not sure which structure works better with SPL. Can someone advise me, please?

Way 1:

{
  "host": "test",
  "lists": [
    { "id": "list1", "ip": "192.168.0.1", "device": "laptop", "value": 123 },
    { "id": "list2", "ip": "192.168.0.2", "device": "phone", "value": 1223 },
    { "id": "list3", "ip": "192.168.0.3", "device": "desktop", "value": 99 }
  ]
}

Way 2:

{
  "host": "test",
  "list1": { "id": "list1", "ip": "192.168.0.1", "device": "laptop", "value": 123 },
  "list2": { "id": "list2", "ip": "192.168.0.2", "device": "phone", "value": 1223 },
  "list3": { "id": "list3", "ip": "192.168.0.3", "device": "desktop", "value": 99 }
}
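Speculatively, if the goal is to report on the per-entry fields in SPL, the array form (way 1) tends to be easier to work with, because the multivalue path lists{} can be expanded without knowing the key names in advance. A sketch, assuming the events are indexed as JSON (the index and sourcetype names here are placeholders):

```spl
index=hec_test sourcetype=_json
| spath path=lists{} output=entry
| mvexpand entry
| spath input=entry
| table id ip device value
```

With way 2 you would instead need to reference each of list1, list2, list3 explicitly, which does not scale as keys are added.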
The _raw JSON format is below:

{
  "test-03": { "field1": 97869, "field2": 179771, "field3": "test-03", "traffics": 1070140210 },
  "test-08": { "field1": 53094, "field2": 103840, "field3": "test-08", "traffics": 998807234 },
  "test-09": { "field1": 145655, "field2": 250518, "field3": "test-09", "traffics": 2212423288 },
  "test-10": { "field1": 83663, "field2": 151029, "field3": "test-10", "traffics": 762554139 },
  "k": 63314
}

When I use timechart avg(test*.traffics), it works, but the numbers were so huge that I tried to change them with | eval test*.traffics=round(test*.traffics/1024,2), but that didn't work. Can anybody help, please?
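The eval command does not accept wildcards in field names, so the rounding has to be applied to each matching column after the timechart. A sketch using foreach, which does iterate over wildcarded field names (the index and sourcetype are placeholders; the single quotes around '<<FIELD>>' are needed because the generated field names contain dots and parentheses):

```spl
index=my_index sourcetype=my_json
| timechart avg(test*.traffics)
| foreach "avg(test*.traffics)" [ eval "<<FIELD>>" = round('<<FIELD>>' / 1024, 2) ]
```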
Something like this should work. I called the lookup db_names.csv ... change that to whatever your actual lookup is named. Everything above the comment just emulates the data you gave.

| makeresults count=1
| eval _raw="DatabaseName,Instance,CPUUtilization
A,A1,10
A,A2,20
C,C1,40
C,C2,50
D,D,60"
| multikv forceheader=1
| fields - _time, _raw, linecount
```^^^^ This emulates the data you gave ^^^^```
| eval inst_cpu=Instance+"#"+CPUUtilization
| fields - Instance CPUUtilization
| inputlookup db_names.csv append=true ```<-- change the lookup name here```
| stats list(inst_cpu) as inst_cpu by DatabaseName
| mvexpand inst_cpu
| eval Instance=mvindex(split(inst_cpu,"#"), 0)
| eval CPUUtilization=mvindex(split(inst_cpu,"#"), 1)
| fillnull value="NULL" Instance CPUUtilization
| fields - inst_cpu
You could do something like this. Imagine a lookup that looks like this.

ip_lookup.csv
ip
10.10.53.22
127.0.0.1
192.168.0.54

index=myindex
    [| inputlookup ip_lookup.csv
    | stats values(eval("src_ip=\""+ip+"\"")) as search
    | eval search=mvjoin(search, " OR ")]

This produces ...

index=myindex (src_ip="10.10.53.22" OR src_ip="127.0.0.1" OR src_ip="192.168.0.54")

... or ...

index=myindex
    [| inputlookup ip_lookup.csv
    | stats values(eval("src_ip!=\""+ip+"\"")) as search
    | eval search=mvjoin(search, " AND ")]

This produces ...

index=myindex (src_ip!="10.10.53.22" AND src_ip!="127.0.0.1" AND src_ip!="192.168.0.54")

You could even put the sub-search into a macro that references the lookup to make it easier to reuse. An example would be like this.

Macro name: my_ip_macro
Macro definition:

[| inputlookup ip_lookup.csv
| stats values(eval("src_ip=\""+ip+"\"")) as search
| eval search=mvjoin(search, " OR ")]

Search using the macro:

index=myindex `my_ip_macro`
Hi all, I have a panel with 4 columns, and I configure the panel settings in "htmlPanel1A".

<panel id="htmlPanel1A">

Due to the different values in each column, I sometimes find 3 columns look left-aligned while the remaining column looks right-aligned. I think the problem comes from the center alignment in the default setting. I would like to change all columns to right-aligned but keep the panel title center-aligned. Is there any suggestion on my CSS configuration to accomplish this?

<panel depends="$alwaysHideCSS$">
  <html>
    <div>
      <style>
        /* define some default colors */
        .dashboard-row .dashboard-panel {
          background-color: lightcyan !important;
        }
        .dashboard-panel h2 {
          background: cyan !important;
          color: #FFFFFF !important;
          text-align: center !important;
          font-weight: bold !important;
          border-top-right-radius: 12px;
          border-top-left-radius: 12px;
        }
        /* override default colors by panel id */
        #htmlPanel1 h2, #htmlPanel1A h2 {
          color: #3C444D !important;
          background-color: #FFFFFF !important;
        }
        .....
      </style>
    </div>
  </html>
</panel>

Thank you so much.
I am trying to get individual values and add a summary row with the minimum value. In this case I have 3 times and want the output to have all three times plus a minimum-time row (labelname=min).

event    _time
a        10:00
b        11:00
c        10:30
min      10:00
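One way to sketch this is with appendpipe, which runs a subpipeline over the existing result set and appends its output as extra rows. The makeresults block below just fakes the three example rows (makeresults format=csv needs Splunk 9.0+; with real events you would use your own search and field names instead):

```spl
| makeresults format=csv data="event,time
a,10:00
b,11:00
c,10:30"
| appendpipe
    [ stats min(time) as time
    | eval event="min" ]
| table event time
```

Note that min() here compares the HH:MM strings lexicographically, which happens to work for same-day times; with real _time values (epoch numbers) the comparison is numeric and always correct.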
What do you mean by pulling the _raw? Do you mean "pulling" as in removing _raw from the fields list? Are you using the collect command to add the events into another index? If you do and don't expli... See more...
What do you mean by pulling the _raw? Do you mean "pulling" as in removing _raw from the fields list? Are you using the collect command to add the events into another index? If you do and don't explicitly set a sourcetype then you will not incur a licensing hit for the data copied to the other index.
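For illustration, a sketch of that collect pattern (the index and field names here are placeholders): when sourcetype is left unset, collect writes the events with the default stash sourcetype, which is not counted against license usage.

```spl
index=web status=500
| fields _time host status uri
| collect index=my_summary_index
```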
This should work.

| makeresults count=1
| eval _raw="System,_time,PP_elapsed_Time,CC_elapsed_Time
Member,2023-09-10,1.52,4
Member,2023-09-11,2,2.6"
| multikv forceheader=1
| fields - _time, _raw, linecount
| rename time as _time
| table System _time PP_elapsed_Time CC_elapsed_Time
```^^^^ Above is just creating example data ^^^^```
| eval SysTime = System + ":" + _time
| fields - System, _time
| untable SysTime Reason Value
| eval System = mvindex(split(SysTime,":"), 0)
| eval _time = mvindex(split(SysTime,":"), 1)
| fields - SysTime
I have a very long SQL query executing every 900 seconds, and the number of events is in the millions. There are ~10 left joins in the SQL query, which filter events on several fields to produce the output, creating load on the database server and on the DB Connect server. I wanted to use the "Catalog", "Schema" and "Table" options in the DB input, where I would like to choose or add multiple left joins. Is there any documentation for this? I didn't find anything in Splunk Docs or in the community. Any explanation or documentation on how to use a left join via the schema options in a DB Connect input would be much appreciated. Thanks in advance!
Hello, I'm working in Splunk Enterprise 8.2.4. I have the below search:

index=Red msg="*COMPLETED Task*"
| spath output=logMessage path=msg
| rex field=logMessage "Message\|[^\t\{]*(?<json>{[^\t]+})"
| eval PP_elapsedTime=spath(json, "PPInfo.PP.elapsedTime")
| eval CC_elapsedTime=spath(json, "CCInfo.CC.elapsedTime")
| eval System = "Member"
| table System, PP_elapsedTime, CC_elapsedTime

Current output:

System _time PP_elapsed_Time CC_elapsed_Time
Member 2023-09-10 1.52 4
Member 2023-09-11 2 2.6

I want the output to read:

System _time Reason Value
Member 2023-09-10 PP_elapsed_Time 1.52
Member 2023-09-10 CC_elapsed_Time 4
Member 2023-09-11 PP_elapsed_Time 2
Member 2023-09-11 CC_elapsed_Time 2.6

I'm not sure where to go from here; any feedback would be appreciated.
Give this a try:

index=_internal source=*var/log/splunk/search_messages.log
I'm not sure why my original reply isn't showing up...but it is now located here in a totally different place but under a copy of this post:   Re: Developing reliable searches dealing with even... - Splunk Community
Here are a couple posts that cover this concept: Solved: Search for items not matching values from a lookup - Splunk Community Solved: Compare search results with a lookup table and ide... - Splunk Community  
The timechart command generates a time series for the selected time range, so you get data for the full time window, even when there are no results for certain buckets. The chart command, like the stats command, generates statistics for the available _time buckets only, so if a time bucket has 0 events, it will not appear (chart can't generate a bucket that isn't present). There is a workaround to get a full time series with chart as well, but it isn't pretty; if timechart is an option, use that. Here is the workaround query:

index = _internal ``` simulate zero-count buckets ```
| bucket _time span=5m
| chart count over _time
| append
    [| makeresults
    | addinfo
    | eval time=mvrange(info_min_time, info_max_time+1, 300)
    | rename comment as "third argument should be in seconds and the same as the span you selected for chart"
    | table time
    | mvexpand time
    | rename time as _time
    | eval count=0]
| chart sum(count) as count by _time
The join command is an inefficient way to combine datasets.  Alternative commands are described in the Search Reference manual (https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Join... See more...
The join command is an inefficient way to combine datasets.  Alternative commands are described in the Search Reference manual (https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Join#Alternative_commands). Splunk has a manual for SQL users.  See https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SQLtoSplunk
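As a sketch of the most common alternative (all index and field names here are made up): instead of join, search both datasets in one pass and let stats stitch together the rows that share the key field.

```spl
(index=orders) OR (index=customers)
| stats values(order_id) as order_id, values(customer_name) as customer_name by customer_id
```

This avoids join's subsearch result limits and is usually much faster on large datasets.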
For your requirement, left join may not be ideal. Try this alternate implementation (replace the makeresults query with your lookup/data query):

| makeresults
| eval DatabaseName=split("A B C"," ")
| mvexpand DatabaseName
| table DatabaseName
| eval from="Lookup"
| append
    [| makeresults
    | eval DatabaseName=split("A A1 10#A A2 20#C C1 40#C C2 50#D D 60","#")
    | mvexpand DatabaseName
    | table DatabaseName
    | rex field=DatabaseName "^(?<DatabaseName>\S+)\s+(?<Instance>\S+)\s+(?<CPUUtilization>\S+)$"
    | eval from="search"]
| eventstats values(from) as from by DatabaseName
| where isnotnull(mvfilter(match(from,"Lookup")))
| foreach CPUUtilization Instance
    [| eval "<<FIELD>>"=coalesce('<<FIELD>>', if(mvcount(from)=1 AND from="Lookup","NULL",null()))]
| stats count by DatabaseName CPUUtilization Instance
| table DatabaseName CPUUtilization Instance
I believe that's covered by the schedule_search capability.
How do I use a lookup table to filter events based on a list of known malicious IP addresses (in CIDR format), or to exclude events from known internal IP ranges?
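A sketch of one common approach, with assumed names throughout: create a lookup file (say malicious_ips.csv with a cidr_range column), define a lookup definition over it whose match_type is set to CIDR(cidr_range) in transforms.conf or the lookup definition UI, and then match each event's IP against the ranges:

```spl
index=firewall
| lookup malicious_ips cidr_range as src_ip OUTPUT cidr_range as matched_range
| where isnotnull(matched_range)
```

To exclude known internal ranges instead, point the lookup at an internal-ranges file and flip the final test to isnull(matched_range). For a handful of ranges, the eval function cidrmatch("10.0.0.0/8", src_ip) is a simpler alternative that needs no lookup at all.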
Thanks. Changing the 'enable_install_app=true' fixed the "Using deprecated capabilities for write errors".
Below is our requirement. The lookup file has just one column, DatabaseName; this is the left dataset:

DatabaseName
A
B
C

My search is for metrics on databases and has multiple rows; this is the right dataset:

DatabaseName Instance CPUUtilization
A A1 10
A A2 20
C C1 40
C C2 50
D D 60

The expected result after the left join is this:

DatabaseName Instance CPUUtilization
A A1 10
A A2 20
B NULL NULL
C C1 40
C C2 50

But when I join using DatabaseName, I get only three records: 1 for A, 1 for B with NULL, and 1 for C. My background is SQL, and for me a left join returns all rows from the left dataset and all matching rows from the right dataset. So please suggest how I can achieve this.