dtakacssplunk's Posts


How do I do that? I'm not familiar with external functions.
I would like to create a column that tells me the variance for the array:

| makeresults
| eval raw="1 session1 O1 S1 5 6 7 9# 2 session2 O2 S2 99 55 77 999# 3 session3 O1 S1 995 55 77 999# 4 session4 O1 S1 1 2 4 1#"
| makemv raw delim="#"
| mvexpand raw
| rename raw as _raw
| rex "(?<User>\S+)\s+(?<ClientSession>\S+)\s+(?<Organization>\S+)\s+(?<Section>\S+)\s+(?<downloadspeed_file1>\S+)\s+(?<downloadspeed_file2>\S+)\s+(?<downloadspeed_file3>\S+)\s+(?<downloadspeed_file4>\S+)"
| eval downloadSpeedsArray=json_array(downloadspeed_file1, downloadspeed_file2, downloadspeed_file3, downloadspeed_file4)
| table User ClientSession Organization Section downloadspeed_file1, downloadspeed_file2, downloadspeed_file3, downloadspeed_file4 downloadSpeedsArray variance

Can you please help me calculate this column? Is the variance normalized across rows?
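One thing I'm considering is computing the population variance per row directly with eval — a rough sketch, assuming exactly these four numeric speed fields:

| eval n=4
| eval mean=(downloadspeed_file1 + downloadspeed_file2 + downloadspeed_file3 + downloadspeed_file4) / n
| eval variance=(pow(downloadspeed_file1 - mean, 2) + pow(downloadspeed_file2 - mean, 2) + pow(downloadspeed_file3 - mean, 2) + pow(downloadspeed_file4 - mean, 2)) / n

This would be a per-row variance, not normalized across rows.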
Splunk team, is there any guidance you can provide on this? With legacy dashboards this was easy and it's documented; how do you do this with the new Dashboard Studio?
How do I set the visibility of panels in Dashboard Studio? I was going to create a multiselect input, but how can I tie the visibility of a panel to the selection being made?
Suppose my logs have fields A=a1..aN, B=b1..bN, C=c1..cN, and I see an increase in the number of requests, i.e.

index=* | bucket _time span=1h | stats count by _time

Is there a special query I can execute which allows me to easily figure out which dimension / value is contributing the most to the increase? E.g. the outcome would be: A=a423 has increased the traffic the most.
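One rough sketch I can think of (untested) is comparing two adjacent windows per value of one dimension, say A, and sorting by the change:

index=* earliest=-2h@h latest=-1h@h
| stats count as before by A
| append [ search index=* earliest=-1h@h latest=@h | stats count as after by A ]
| stats sum(before) as before, sum(after) as after by A
| fillnull value=0 before after
| eval delta=after - before
| sort - delta

But that has to be repeated per dimension, so maybe there is something built in.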
I want them to see the context before and after, to help convey the significance.
I would like to generate a splunk URL that has: 1) the query to render, 2) the visualization to render, and 3) some query annotations. The query and the visualization are fine; I'm not sure how to get the annotations in. For example, with the query I have, I can render a proper visualization (I can cut and paste the URL and someone else can re-render the same), but how about adding an annotation to the chart (red lines at a particular timeline) that I want the user to focus on? Is it possible to say: please draw a line at this epoch time, and another line at that epoch time?
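One crude SPL-only trick might be to append a synthetic overlay series that is non-null only at the timestamps to highlight — a sketch, where the epoch values and the marker height are placeholders:

... | timechart count
| eval marker=if(_time=1532620800 OR _time=1532624400, 100, null())

But that only lands on bucket boundaries, so a real annotation mechanism would be nicer.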
I have data in the following form:

field A    field B (an array)
a          {"k":1} {"k":2} {"k":3}
b          {"k":1} {"k":1} {"k":1}

field B is an array. I want to produce a table like this:

field A    sumB
a          6
b          3

What is the way to extract the values and add them up? My thinking was to do

| eval value=spath(fieldB, "k")

and I was expecting value to hold the arrays 1,2,3 and 1,1,1, but it did not.
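For what it's worth, here is a sketch that might work, assuming the fields are literally named A and fieldB and that fieldB holds the JSON array as a string — I suspect the eval form needs the array path {}.k rather than k:

... | spath input=fieldB path={}.k output=k
| mvexpand k
| stats sum(k) as sumB by A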
If I have the data in the following format:

time    session    event
t1      session1   actionA
t2      session1   actionB
t3      session1   actionC
t4      session1   actionA
t5      session2   actionB
t6      session2   actionC

I want to write a splunk query to transform it to this format:

from       to         count    timetaken
actionA    actionB    1        (t2-t1)
actionB    actionC    2        (t3-t2) + (t6-t5)
actionC    actionA    1        (t4-t3)

Can someone recommend an expression for this?
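A sketch of one possible approach with streamstats (untested; assumes events sort ascending by time within a session):

... | sort 0 session _time
| streamstats window=1 global=f current=f last(event) as from, last(_time) as fromtime by session
| where isnotnull(from)
| eval timetaken=_time - fromtime
| stats count, sum(timetaken) as timetaken by from, event
| rename event as to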
I am trying to calculate a hash of a splunk log line. How come sha256(_raw) does not give a result? sha256 works on other fields.
I have to run a query periodically like this. The query seems to run pretty slow. Are there ways to optimize such a query? starttime=04/17/2019:03:00:00 endtime=04/17/2019:04:00:00 index=* logline=loglinetype (_raw="*event1*" OR _raw="*event2*" OR _raw="*event3*") | spath input=lineinfo output=event path=event | search (event=event1 OR event=event2 OR event=event3) | eval duration=spath(payload,"duration") | eval duration_0to500=case(duration>0 AND duration<=500, 1, true(), 0) | eval duration_500to1000=case(duration>500 AND duration<=1000, 1, true(), 0) | eval duration_1000to1500=case(duration>1000 AND duration<=1500, 1, true(), 0) | eval duration_1500to2000=case(duration>1500 AND duration<=2000, 1, true(), 0) | eval duration_2000to2500=case(duration>2000 AND duration<=2500, 1, true(), 0) | eval duration_2500to3000=case(duration>2500 AND duration<=3000, 1, true(), 0) | eval duration_3000to3500=case(duration>3000 AND duration<=3500, 1, true(), 0) | eval duration_3500to4000=case(duration>3500 AND duration<=4000, 1, true(), 0) | eval duration_4000to4500=case(duration>4000 AND duration<=4500, 1, true(), 0) | eval duration_4500to5000=case(duration>4500 AND duration<=5000, 1, true(), 0) | eval duration_5000to5500=case(duration>5000 AND duration<=5500, 1, true(), 0) | eval duration_5500to6000=case(duration>5500 AND duration<=6000, 1, true(), 0) | eval duration_6000to6500=case(duration>6000 AND duration<=6500, 1, true(), 0) | eval duration_6500to7000=case(duration>6500 AND duration<=7000, 1, true(), 0) | eval duration_7000to7500=case(duration>7000 AND duration<=7500, 1, true(), 0) | eval duration_7500to8000=case(duration>7500 AND duration<=8000, 1, true(), 0) | eval duration_8000to8500=case(duration>8000 AND duration<=8500, 1, true(), 0) | eval duration_8500to9000=case(duration>8500 AND duration<=9000, 1, true(), 0) | eval duration_9000to9500=case(duration>9000 AND duration<=9500, 1, true(), 0) | eval duration_9500to10000=case(duration>9500 AND duration<=10000, 1, true(), 0) | eval duration_10000to10500=case(duration>10000 AND duration<=10500, 1, true(), 0) | eval duration_10500to11000=case(duration>10500 AND duration<=11000, 1, true(), 0) | eval duration_11000to11500=case(duration>11000 AND duration<=11500, 1, true(), 0) | eval duration_11500to12000=case(duration>11500 AND duration<=12000, 1, true(), 0) | eval duration_12000to12500=case(duration>12000 AND duration<=12500, 1, true(), 0) | eval duration_12500to13000=case(duration>12500 AND duration<=13000, 1, true(), 0) | eval duration_13000to13500=case(duration>13000 AND duration<=13500, 1, true(), 0) | eval duration_13500to14000=case(duration>13500 AND duration<=14000, 1, true(), 0) | eval duration_14000to14500=case(duration>14000 AND duration<=14500, 1, true(), 0) | eval duration_14500to15000=case(duration>14500 AND duration<=15000, 1, true(), 0) | eval duration_15000to15500=case(duration>15000 AND duration<=15500, 1, true(), 0) | eval duration_15500to16000=case(duration>15500 AND duration<=16000, 1, true(), 0) | eval duration_16000to16500=case(duration>16000 AND duration<=16500, 1, true(), 0) | eval duration_16500to17000=case(duration>16500 AND duration<=17000, 1, true(), 0) | eval duration_17000to17500=case(duration>17000 AND duration<=17500, 1, true(), 0) | eval duration_17500to18000=case(duration>17500 AND duration<=18000, 1, true(), 0) | eval duration_18000to18500=case(duration>18000 AND duration<=18500, 1, true(), 0) | eval duration_18500to19000=case(duration>18500 AND duration<=19000, 1, true(), 0) | eval 
duration_19000to19500=case(duration>19000 AND duration<=19500, 1, true(), 0) | eval duration_19500to20000=case(duration>19500 AND duration<=20000, 1, true(), 0) | eval duration_overflow=case(duration>20000, 1, true(), 0) | bucket _time span=60m | stats sum(duration_0to500) as hist_duration_0to500, sum(duration_500to1000) as hist_duration_500to1000, sum(duration_1000to1500) as hist_duration_1000to1500, sum(duration_1500to2000) as hist_duration_1500to2000, sum(duration_2000to2500) as hist_duration_2000to2500, sum(duration_2500to3000) as hist_duration_2500to3000, sum(duration_3000to3500) as hist_duration_3000to3500, sum(duration_3500to4000) as hist_duration_3500to4000, sum(duration_4000to4500) as hist_duration_4000to4500, sum(duration_4500to5000) as hist_duration_4500to5000, sum(duration_5000to5500) as hist_duration_5000to5500, sum(duration_5500to6000) as hist_duration_5500to6000, sum(duration_6000to6500) as hist_duration_6000to6500, sum(duration_6500to7000) as hist_duration_6500to7000, sum(duration_7000to7500) as hist_duration_7000to7500, sum(duration_7500to8000) as hist_duration_7500to8000, sum(duration_8000to8500) as hist_duration_8000to8500, sum(duration_8500to9000) as hist_duration_8500to9000, sum(duration_9000to9500) as hist_duration_9000to9500, sum(duration_9500to10000) as hist_duration_9500to10000, sum(duration_10000to10500) as hist_duration_10000to10500, sum(duration_10500to11000) as hist_duration_10500to11000, sum(duration_11000to11500) as hist_duration_11000to11500, sum(duration_11500to12000) as hist_duration_11500to12000, sum(duration_12000to12500) as hist_duration_12000to12500, sum(duration_12500to13000) as hist_duration_12500to13000, sum(duration_13000to13500) as hist_duration_13000to13500, sum(duration_13500to14000) as hist_duration_13500to14000, sum(duration_14000to14500) as hist_duration_14000to14500, sum(duration_14500to15000) as hist_duration_14500to15000, sum(duration_15000to15500) as hist_duration_15000to15500, sum(duration_15500to16000) as hist_duration_15500to16000, sum(duration_16000to16500) as hist_duration_16000to16500, sum(duration_16500to17000) as hist_duration_16500to17000, sum(duration_17000to17500) as hist_duration_17000to17500, sum(duration_17500to18000) as hist_duration_17500to18000, sum(duration_18000to18500) as hist_duration_18000to18500, sum(duration_18500to19000) as hist_duration_18500to19000, sum(duration_19000to19500) as hist_duration_19000to19500, sum(duration_19500to20000) as hist_duration_19500to20000, sum(duration_overflow) as hist_duration_overflow by _time, index, event
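One likely simplification is to let bin do the histogramming on duration itself, instead of the forty case()/sum() pairs — a sketch using the same field names (and note that narrowing index=* to a specific index would probably help the most):

starttime=04/17/2019:03:00:00 endtime=04/17/2019:04:00:00 index=* logline=loglinetype (_raw="*event1*" OR _raw="*event2*" OR _raw="*event3*")
| spath input=lineinfo output=event path=event
| search event=event1 OR event=event2 OR event=event3
| eval duration=tonumber(spath(payload, "duration"))
| eval duration=if(duration > 20000, 20001, duration)
| bin duration span=500
| bucket _time span=60m
| stats count by _time, index, event, duration

This produces one row per (hour, bucket) instead of one column per bucket; xyseries or untable can reshape it if the column layout is required.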
How do I convert the output of a table from the stats command that looks like this:

TIME     VALUE    METRIC
time1    a        100
time1    b        200
time2    a        50
time2    b        90

to this?

TIME     a      b
time1    100    200
time2    50     90
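If the columns are literally named TIME, VALUE, and METRIC, xyseries should do exactly this pivot:

... | xyseries TIME VALUE METRIC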
In my data I have rows such as this:

{"calls":[{"call":"a","ts":"1","context":{"cached":"false"}},{"call":"b","ts":"2","context":{"cached":"true"}},{"call":"c","ts":"3","context":{"cached":"true"}},{"call":"d","ts":"4","context":{"cached":"true"}}]}

I want to find the rows which happened at ts <= 3 and see what % of them were cached or not. I have this query:

index=* | stats count | eval cutoffts=3
| eval calls="{\"calls\":[{\"call\":\"a\",\"ts\":\"1\",\"context\":{\"cached\":\"false\"}},{\"call\":\"b\",\"ts\":\"2\",\"context\":{\"cached\":\"true\"}},{\"call\":\"c\",\"ts\":\"3\",\"context\":{\"cached\":\"true\"}},{\"call\":\"d\",\"ts\":\"4\",\"context\":{\"cached\":\"true\"}}]}"
| eval callsarr=spath(calls,"calls{}")
| eval callsts=spath(calls, "calls{}.ts")
| eval callscachedarr=spath(calls, "calls{}.context.cached")
| eval callscachedarrtrue=mvcount(mvfilter(callscachedarr="true"))
| eval callscachedarrfalse=mvcount(mvfilter(callscachedarr="false"))
| fillnull value=0 callscachedarrtrue callscachedarrfalse
| eval cachedprecentage=callscachedarrtrue/(callscachedarrtrue+callscachedarrfalse)
| table calls callsarr callsts callscachedarr callscachedarrtrue callscachedarrfalse cachedprecentage

Unfortunately, I'm unable to filter the array down to only the elements that had ts <= 3, so I end up with 3/4 = .75 instead of 2/3 = .66.
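One workaround that might do it is pairing the two multivalue fields with mvzip, so mvfilter can see both the ts and the cached flag inside each element — a sketch (the cutoff has to be a literal here, since mvfilter can only reference one field):

| eval zipped=mvzip(callsts, callscachedarr, "|")
| eval kept=mvfilter(tonumber(mvindex(split(zipped, "|"), 0)) <= 3)
| eval kepttrue=mvcount(mvfilter(match(kept, "true")))
| fillnull value=0 kepttrue
| eval cachedprecentage=kepttrue / mvcount(kept)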
Let's say I have a log line that contains a JSON field with this content:

{
  "breakdown": {
    "a": [ { "t1": 100, "t2": 0 }, { "t1": 0, "t2": 0 } ],
    "b": [ { "t1": 1, "t2": 0 }, { "t1": 1, "t2": 0 } ],
    "c": [ { "t1": 1, "t2": 2 } ],
    "d": [ { "t1": 5, "t2": 1 } ]
  }
}

I want to Splunk this and convert the results into something like this:

component    count    p50_t1    p50_t2    min_t1    max_t1    min_t2    max_t2
a            2        50        0         0         100       0         0
b            2        1         0         1         1         0         0
c            1        1         2         1         1         2         2
d            1        5         1         5         5         1         1

What's the Splunk query to do such a transformation?
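A sketch of one possible route, assuming Splunk 8.1+ for the JSON eval functions (json_keys, json_array_to_mv, json_extract) — untested:

| spath input=_raw path=breakdown output=bd
| eval comp=json_array_to_mv(json_keys(bd))
| mvexpand comp
| eval entries=json_extract(bd, comp)
| eval pair=mvzip(spath(entries, "{}.t1"), spath(entries, "{}.t2"), ",")
| mvexpand pair
| eval t1=tonumber(mvindex(split(pair, ","), 0)), t2=tonumber(mvindex(split(pair, ","), 1))
| stats count, median(t1) as p50_t1, median(t2) as p50_t2, min(t1) as min_t1, max(t1) as max_t1, min(t2) as min_t2, max(t2) as max_t2 by comp
| rename comp as component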
My intention is to copy events out of splunk into some other store. I would like to periodically run a query and copy the splunk data somewhere else. In certain cases the splunk instance is down / times out queries / events show up later than indexing time. Usually I could fetch, say, every hour's results and append them to the exported dataset, but I do want to upsert, and as the upsert key I want to use the starttime. Anyway, it seems like starttime / endtime are very special parameters which cannot be used in the table being created.
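One thing that might work around that: addinfo materializes the search window as regular fields, which could then serve as the upsert key columns — a sketch:

... | addinfo
| eval starttime=info_min_time, endtime=info_max_time
| table starttime, endtime, ...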
Yes, I was thinking of the solution you proposed, but I wish the bin function's min / max arguments could somehow be used to achieve something like it...
I figure I can do things like this:

index=* ..... | eval runtimewithmax=case(runTime > 60, 61, true(), runTime) | bucket _time span=1h | bin span=20 runtimewithmax | eval epoachtime=_time | stats count as eventcount by epoachtime, context, sourcetype, gdpr, index, path, runtimewithmax

but I was hoping there is a better way.
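A slightly tighter variant of the same clamp might be the eval min() function:

| eval runtimewithmax=min(runTime, 61)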
Hello, I want to use bin to categorize my runtimes into specific buckets. Let's say I want to show runtime and bucketize it every hour into buckets 0-20, 20-40, 40-60, 60-maxtime. How do I do this? Currently my query is like this:

index=* ..... | bucket _time span=1h | bin span=20 end=200 runTime | eval epoachtime=_time | stats count by epoachtime, runTime | makecontinuous runTime | fillnull count

and I get the following result:

epoachtime    runTime    count
1532620800    0-20       2263
1532624400    0-20       3097
1532628000    0-20       2249
1532617200    0-20       45
1532631600    0-20       1615
1532631600    20-40      3
1532631600    40-60      1
              60-80      0
              80-100     0
              100-120    0
              120-140    0
              140-160    0
              160-180    0
              180-200    0
              200-220    0
1532620800    220-240    1
1532631600    240-260    2
1532620800    260-280    1
1532631600    260-280    1
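A sketch of one way this might be done — clamp the runtime before binning so everything above 60 collapses into a single bin, then relabel that bin (untested):

index=* .....
| bucket _time span=1h
| eval runTimeCapped=if(runTime > 60, 61, runTime)
| bin span=20 runTimeCapped
| eval runTimeBucket=if(runTimeCapped="60-80", "60-max", runTimeCapped)
| stats count by _time, runTimeBucket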
I seem to be throttled when executing a lot of splunk queries; my jobs get queued up. Is there anything I can do to gain insight into why a particular user is being throttled?
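As a starting point, the _audit index records search activity per user, so something like this might show who is submitting the most searches (assuming access to _audit):

index=_audit action=search info=granted
| stats count by user
| sort - count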
I would like to download all the jobs that are being executed currently, and if possible in the past — something like the Jobs page (en-US/app/launcher/job_management#) — into a CSV. Is there a splunk query I can execute for this?
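One possibility is the rest command against the search jobs endpoint — a sketch (field names such as dispatchState and runDuration come from the jobs API; adjust as needed):

| rest /services/search/jobs
| table sid, author, dispatchState, runDuration, eventCount

The result table can then be exported to CSV from the UI.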