All Posts

Hi @taijusoup64 , let me understand: you want to calculate bytes only when id.orig_h="frontend" AND id.resp_h="frontend", is this correct? In that case, add the condition to the eval statement:

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| fields orig_bytes, resp_bytes
| eval terabytes=((if(id.resp_h="front end",resp_bytes,0))+(if(id.orig_h="front end",orig_bytes,0)))/1024/1024/1024/1024
| stats sum(terabytes)

Ciao. Giuseppe

Why did you use all those parentheses?

Ciao. Giuseppe
Hi @gauravu_14 , in general, having a lookup containing the list of hosts to monitor, you can use a search like this:

| tstats count WHERE index=* BY host
| append [ | inputlookup your_lookup.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If you are monitoring clusters, the lookup should also indicate the cluster pairing, something like this:

primary_host  secondary_host
host1         host1bis
host2
host3         host3bis
host4

and run a slightly different search:

| tstats count WHERE index=* BY host
| lookup your_lookup.csv primary_host AS host OUTPUT secondary_host
| lookup your_lookup.csv secondary_host AS host OUTPUT primary_host
| append [ | inputlookup your_lookup.csv | rename primary_host AS host | eval count=0 | fields host count ]
| append [ | inputlookup your_lookup.csv | rename secondary_host AS host | eval count=0 | fields host count ]
| stats sum(count) AS total values(primary_host) AS primary_host values(secondary_host) AS secondary_host BY host
| where total=0 AND NOT (primary_host=* secondary_host=*)

About the indexes related to the hosts that aren't sending data, it's more difficult because this search doesn't carry any information about the indexes; the only way is to also store in the lookup the indexes each host usually writes to, and in that case you can add this information to the stats command:

| tstats count WHERE index=* BY host
| lookup your_lookup.csv primary_host AS host OUTPUT secondary_host indexes
| lookup your_lookup.csv secondary_host AS host OUTPUT primary_host indexes
| append [ | inputlookup your_lookup.csv | rename primary_host AS host | eval count=0 | fields host count ]
| append [ | inputlookup your_lookup.csv | rename secondary_host AS host | eval count=0 | fields host count ]
| stats sum(count) AS total values(primary_host) AS primary_host values(secondary_host) AS secondary_host values(indexes) AS indexes BY host
| where total=0 AND NOT (primary_host=* secondary_host=*)

Ciao. Giuseppe
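For illustration only, here is an emulation of what your_lookup.csv with the extra indexes column could contain (hypothetical host and index names, not from a real environment); in the real searches above you would read it with | inputlookup your_lookup.csv instead:

| makeresults format=csv data="primary_host, secondary_host, indexes
host1, host1bis, web
host2, , os
host3, host3bis, web
host4, , os"
``` hypothetical lookup contents for illustration only; hosts without a secondary simply leave that column empty ```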
Hi All, we have DB agents and the SQL servers are still using TLS 1.1 and 1.0. Can this affect the DB metrics reporting to AppD? Regards, Fadil
Hi @mfonisso, what are the resources of your Splunk server? Splunk requires at least 12 CPUs and 12 GB RAM (more if you have ES or ITSI) and a disk with at least 800 IOPS. Ciao. Giuseppe
Hi @Rahul-Sri , this is another question and it's always better to open a new case, even if it is the following step of your original request; that way you'll surely get faster and probably better answers. Anyway, the approach is to use the eval command (not fieldformat) and round the number:

| eval count=round(count/1000000,2)."M"

Please accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated.
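If it helps, here is a minimal sketch of how that eval could slot into the timechart search from the original question; it assumes the same status field, and it swaps timechart for bin + stats so there is a single count field for the eval to act on (timechart by status would instead produce one column per status value):

| eval status=case(like(status, "2%"),"200|201",like(status, "5%"),"503")
| bin _time span=1d@d
| stats count BY _time status
``` round to millions and append "M" for display ```
| eval count=round(count/1000000,2)."M"

Note that eval stores the formatted string in the results, whereas fieldformat only changes how the underlying number is displayed.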
Hi @Ryan.Paredez , thank you for this. Actually I was having this concern for another account. Regards, Fadil
Hi, the above query in my dashboard is displaying large numbers. I want to convert those to a shorter number with "million" appended to it. For example, if the value shows 6,000,000 then the result should display 6mil. How can I achieve this? I tried using:

| eval status=case(like(status, "2%"),"200|201",like(status, "5%"),"503")
| timechart span=1d@d usenull=false useother=f count(status) by status
| fieldformat count = count/1000000

But this does not work. Any help is appreciated.
My mistake - I neglected groupby. I know this has come up before (because some veterans here helped me:-)) but I can't find the old answer. (In fact, this delta-with-groupby question comes up regularly because it's a common use case.) So, here is a shot:

|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
| streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
| sort application _time
| eval delta = if(Trans_max == Trans, delta, "-" . delta)
| eval pct_delta = delta / Trans * 100
| fields - Trans_max

Here is my full simulation:

| mstats max(_value) as Trans where index=_metrics metric_name = spl.mlog.bucket_metrics.* earliest=-8h@h latest=-4h@h by metric_name span=1h
| rename metric_name as application
``` the above simulates |mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application ```
| streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
| sort application _time
| eval delta = if(Trans_max == Trans, delta, "-" . delta)
| eval pct_delta = delta / Trans * 100
| fields - Trans_max

My output is:

_time             application                                    Trans       delta       pct_delta
2024-03-28 12:00  spl.mlog.bucket_metrics.created                0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.created                0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.created                0.000000    0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.created                0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.created_replicas       0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.created_replicas       0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.created_replicas       0.000000    0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.created_replicas       0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.current_hot            12.000000   0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.current_hot            12.000000   0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.current_hot            12.000000   0.000000    0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.current_hot            12.000000   0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.current_hot_replicas   0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.current_hot_replicas   0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.current_hot_replicas   0.000000    0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.current_hot_replicas   0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.current_total          215.000000  0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.current_total          215.000000  0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.current_total          214.000000  -1.000000   -0.4672897
2024-03-28 15:00  spl.mlog.bucket_metrics.current_total          214.000000  0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.frozen                 0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.frozen                 0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.frozen                 1.000000    1.000000    100.0000
2024-03-28 15:00  spl.mlog.bucket_metrics.frozen                 0.000000    -1.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.rolled                 0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.rolled                 0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.rolled                 0.000000    0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.rolled                 0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.total_removed          0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.total_removed          0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.total_removed          1.000000    1.000000    100.0000
2024-03-28 15:00  spl.mlog.bucket_metrics.total_removed          0.000000    -1.000000

Obviously my results have lots of nulls because lots of my "Trans" values are zero. But you get the idea.
Reached out to their support team at education@splunk.com and they resolved it for me.
@yuanliu There was a misunderstanding from my end about the query. Your suggested query works great. Thanks again 
Hi @yuanliu , Thanks a lot, your query works
Hi, we are looking for a way to integrate Checkmarx with Splunk. What would be the best way?
"To clarify about your query - 'given that list is an array, selecting only the first element for matching may not be what the use case demands' - I understand that it sounds weird, but our use case is about selecting events where the first object in an array/list should have type == 'code'."

What I was trying to say is: do you select this one, when type == "code" is the second element?

{ list: [ {"name": "Hello", "type": "document"}, {"name": "Hello", "type": "code"} ] }

If you want to select this kind of event as well as the other kind, only the second search will work. If you want to select an event only if its first element contains type == "code", use the first search.

"The first query, as you have mentioned it 'Select events in which list{}.name has one unique value "Hello"' - is there a way to select events in which all the objects should contain name == 'Hello' instead of just one unique value?"

This gets confusing. My rephrasing "has one unique value 'Hello'" is based on your OP statement that all the items in the list array should have "name": "Hello". Did I misunderstand this? Anyway, my searches do retrieve Event 1 as expected. Is there any problem with them?
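As a side note, for the "every object must have name == 'Hello'" variant asked about in this thread, one possible sketch (not one of the searches referenced above; your_index and your_sourcetype are placeholders, and it assumes the raw event is JSON so spath can read list{}.name) is to compare the total element count with the count of matching elements:

index=your_index sourcetype=your_sourcetype
| spath path=list{}.name output=names
``` keep an event only when every value in names is exactly "Hello" ```
| where mvcount(names) = mvcount(mvfilter(match(names, "^Hello$")))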
Hi @yuanliu , when I run the below query, the Trans values are fine, but I'm getting negative values and an empty row for the delta_Trans and pct_delta_Trans fields; those values are not correct.

_time             application  Trans        delta_Trans   pct_delta_Trans
2022-01-22 02:00  app1         3456.000000
2022-01-22 02:00  app2         5632.000000  -1839.000000  -5438.786543
2022-01-22 02:00  app3         5643.000000  36758.000000  99.76435678
2022-01-22 02:00  app4         16543.00000  -8796.908678  -8607.065438
Hi @yuanliu , thanks for the response.

The first query, as you have described it ('Select events in which list{}.name has one unique value "Hello"') - is there a way to select events in which all the objects should contain name == "Hello", instead of just one unique value?

To clarify about your query - 'given that list is an array, selecting only the first element for matching may not be what the use case demands' - I understand that it sounds weird, but our use case is about selecting events where the first object in an array/list should have type == "code".
Something like this?

|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
| delta Trans as delta_Trans
| eval pct_delta_Trans = delta_Trans / Trans * 100
I don't quite get why you want a sparse corner for Total_* but it is hackable:

| appendpipe [ eval Total_A = null() ]
| eval Total_B = if(isnull(Total_A), Total_B, null())
| eval Unixtime_AB = if(isnull(Total_B), Unixtime_A, Unixtime_B)
| fields Total_* Unixtime_AB

(Note this hack works for a small number of Unixtime_* but not particularly scalable.) Just in case you want a dense matrix, I'm offering an obvious result set:

Total_AB  Unixtime_AB
1         imaginary_unix_3
2         imaginary_unix_1
3         imaginary_unix_4
4         imaginary_unix_3
5         imaginary_unix_1
6         imaginary_unix_4

To get this, do:

| appendpipe [ eval Total_A = null() ]
| eval Total_AB = if(isnull(Total_A), Total_B, Total_A)
| eval Unixtime_AB = if(isnull(Total_B), Unixtime_A, Unixtime_B)
| fields - *_A *_B

Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="Unixtime_A, Total_A, Unixtime_B, Total_B
imaginary_unix_1, 1, imaginary_unix_3, 4
imaginary_unix_2, 2, imaginary_unix_1, 5
imaginary_unix_3, 3, imaginary_unix_4, 6"
``` data emulation above ```

Hope this helps.
I want to compare previous hour data with present hour data and get the percentage, using the query below:

|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
From the subject title, what I mean is that it will increase the row count and decrease the column count - that is my intention. After a series of mathematical computations, I ended up with the following table:

Unixtime_A        Total_A  Unixtime_B        Total_B
imaginary_unix_1  1        imaginary_unix_3  4
imaginary_unix_2  2        imaginary_unix_1  5
imaginary_unix_3  3        imaginary_unix_4  6

Notes:
Unixtime_A may not equal Unixtime_B, but they are formatted the same, that is, snapped to the month with @mon (unixtime).
Total_A and Total_B were the result of various conditional counts, so they need to be separate fields.

The desired table is:

Unixtime_AB       Total_A  Total_B
imaginary_unix_1  1
imaginary_unix_2  2
imaginary_unix_3  3
imaginary_unix_3           4
imaginary_unix_1           5
imaginary_unix_4           6

Which I can then use | fillnull on, followed by a simple stats to sum both totals by Unixtime_AB, like so:

| stats sum(Total_A), sum(Total_B) by Unixtime_AB

I'm not 100% sure if transpose, untable, or xyseries could do this - or if I was misusing them somehow.
Hi, what are the options to integrate AppDynamics with Zabbix, or the other way around, to send data from Zabbix to AppDynamics? Thanks, Akhila