All Posts

Hi @Abass42, I tried this with rex and it works well:

| makeresults
| eval _raw="10/24/2023 06:00:04,source=SXXXX-88880000,destination=10.10.100.130,DuBlIn_,11.11.119.111,port_80=True,port_443=True,port_21=False,port_22=True,port_25=False,port_53=False,port_554=False,port_139=False,port_445=False,port_123=False,port_3389=False"
| extract
| rex max_match=5 field=_raw "port\_(?P<open_ports>\d+)\=True"
| mvexpand open_ports
| table _time, destination, gpss_src_ip, open_ports
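One caveat, depending on how many port_* fields your real events carry (the sample has eleven): max_match=5 stops rex after five matches, so an event with more than five open ports would be silently truncated. Setting max_match=0 removes the limit:

| rex max_match=0 field=_raw "port\_(?P<open_ports>\d+)\=True"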
Nice document @_JP, thanks for sharing. The trouble with newbies is that they want one person to hold their hand and walk with them (literally). Even if we say "I can only show you the door, you have to decide to walk through it" (the great Morpheus), they still want us to walk through it with them, holding their hands!
Just in case anyone is looking for proof of @richgalloway's line that "Splunk only supports HTTP 1.1" (and no HTTP/2 yet, as of Oct 2023): go to the server.conf documentation and search for "http 1" (and/or "http 2"): https://docs.splunk.com/Documentation/Splunk/9.1.1/admin/serverconf
This isn't a question, rather just a place to drop a PDF I put together that I titled "Bare Bones Splunk".

I've seen a lot of people try to get started with Splunk but get stuck right after installing Splunk Enterprise on their local machine. It can be daunting to log into Splunk for the first time and know what the heck you should do. A person can get through the install to the What Happens Next page and be pretty overwhelmed by what comes next: Learn SPL and search? What should they search? How should they start getting their data in? What sort of data should they start with? What dashboard should they build? They've started, but they need that ah-ha example to see how this tool will fit into their existing environment and workflow.

The attached Bare_Bones_Splunk.pdf guides the reader from the point of install to using the data already being indexed in index=_internal to replicate a few common Splunk use cases:
- Monitor a web server
- Monitor an application server
- Monitor security incidents

The examples are really simple, and the resulting dashboard created in the tutorial is a poor example of something your boss might want (or not... how observant is your boss? Do they just want a few graphs with nice colors?). But this will give someone a really quick intro to Splunk without having to do anything other than install it (and then maybe they'll be ready to tackle a broader introduction, like the Search Tutorial).
Yes, field1, field2, x, y, z, a, b, c are all from the same set of events and are all non-null, and in general we might have other group-bys besides x,y,z and a,b,c. In one of my frequent use cases I have three: x, xy, and xyz (say, when I want to calculate statistics at different levels of granularity, e.g. percentile response times by hour, by hour-IP, or by hour-IP-server). I guess the question is more of a data-engineering problem than an analytics one: regardless of whether we want two tables or one, how do we generate the data quickly? As it happens, running two or more separate searches is significantly slower than running one and doing some fancy stats magic on it, even if it's more complicated. Also, just out of curiosity, what do we mean by normalized tables here?
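For the hour / hour-IP / hour-IP-server case, the duplicate-and-expand pattern from the question generalizes to N group-by sets in a single pass. A minimal sketch, assuming illustrative field names ip, server, and response_time (none of which come from the thread):

index=example
| bin _time span=1h
| eval key=mvrange(0,3)
| mvexpand key
``` one composite group-by string per granularity level ```
| eval groupby=case(key=0, tostring(_time),
                    key=1, _time.":".ip,
                    key=2, _time.":".ip.":".server)
| stats perc95(response_time) as p95 by key, groupby

Each event is copied three times (mvrange(0,3) yields 0, 1, 2), and each copy is keyed at a different granularity, so one stats pass produces all three rollups.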
You didn't run addinfo in the chain search as suggested. Of course that will cause an error: without addinfo, the info_min_time and info_max_time fields don't exist, so the calculation amounts to division by zero.
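For anyone landing here later, a minimal sketch of the intended pattern (the rate calculation is illustrative, not taken from the thread): addinfo attaches info_min_time and info_max_time to every event, and they must be carried through any stats so the final eval has a non-null time span to divide by:

index=example
| addinfo
| stats count, first(info_min_time) as info_min_time, first(info_max_time) as info_max_time
| eval events_per_second = count / (info_max_time - info_min_time)

If the stats clause drops the info_* fields, the denominator is null and the division fails.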
As with most questions about data analytics, you need to explain the data characteristics. Do fields x, y, z and a, b, c appear in the same events? What about field1 and field2? If they are totally disjoint, you will get a block-diagonal result, whether you group by x y z or by the concatenation x.",".y.",".z, as below:

avg(field1)     perc95(field2)    x    y    z    a    b    c
avg_x1y1z1                        x1   y1   z1
avg_x2y2z2                        x2   y2   z2
                perc95_a1b1c1                    a1   b1   c1
                perc95_a2b2c2                    a2   b2   c2

Such data would be best presented as two normalized tables. What is the point of combining them?
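To make "two normalized tables" concrete: the suggestion is simply to produce each aggregation with its own search (e.g. two panels of a dashboard), so that every column is meaningful for every row. A minimal sketch using the names from the question:

index=example
| stats avg(field1) as avg_field1 by x, y, z

and, separately:

index=example
| stats perc95(field2) as p95_field2 by a, b, c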
Hi @smanojkumar,

Here's a sample dashboard that has a multi-select with prefix/suffix values, and a second token that you can append to drill-downs. Save the dashboard as "send_multi_value_token_drilldown":

<form version="1.1">
  <label>Send Multi Value Token Drilldown</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="multiselect" token="multi" searchWhenChanged="true">
      <label>Multiselect</label>
      <choice value="val1">key1</choice>
      <choice value="val2">key2</choice>
      <choice value="val3">key3</choice>
      <choice value="val4">key4</choice>
      <choice value="val5">key5</choice>
      <default>val1</default>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <initialValue>val1</initialValue>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter>,</delimiter>
      <change>
        <eval token="drilldown">replace('form.multi',"([^,]+),?","&amp;form.multi=$1")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Token Values</title>
      <html>
        <h1>Token: $multi$</h1>
        <h1>Drilldown Token: $drilldown$</h1>
        <a href="send_multi_value_token_drilldown?$drilldown$" target="_BLANK">Drilldown</a>
      </html>
    </panel>
  </row>
</form>

The trick here is in the <change> event on the multiselect. Each time you change the value of the multiselect, it builds a new token from the $form.multi$ token rather than just using $multi$:

<eval token="drilldown">replace('form.multi',"([^,]+),?","&amp;form.multi=$1")</eval>

Try changing the values and then click on the "Drilldown" link. You will see the same dashboard load in a new window with the same values pre-selected.

On your dashboard, just add the $drilldown$ token to your drill-down links, and it will propagate the values.

Cheers,
Daniel
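To see what that replace() does, a quick standalone demo (the val1/val2 selection is hypothetical; note that in raw SPL the eval replace backreference is \1, whereas the SimpleXML <eval> above uses $1):

| makeresults
| eval form_multi="val1,val2"
| eval drilldown=replace(form_multi, "([^,]+),?", "&form.multi=\1")

This yields drilldown="&form.multi=val1&form.multi=val2", i.e. one form.multi query parameter per selected value, which is exactly what the drill-down URL needs to pre-select them.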
Hi @Abass42,

You can achieve this using the foreach command:

| makeresults
| eval _raw="10/24/2023 06:00:04,source=SXXXX-88880000,destination=10.10.100.130,DuBlIn_,11.11.119.111,port_80=True,port_443=True,port_21=False,port_22=True,port_25=False,port_53=False,port_554=False,port_139=False,port_445=False,port_123=False,port_3389=False"
| extract
``` Above is to generate the test data ```
``` Iterate through each port_xxx field to pick out the open ones ```
| foreach port_* [| eval open_ports=if(<<FIELD>>=="True", mvappend(open_ports, "<<MATCHSTR>>"), open_ports)]
| mvexpand open_ports
| table _time, destination, gpss_src_ip, open_ports

We use foreach to pick out all the fields that start with port_ and test whether they are "True". If they are, we add the number part of the field name (<<MATCHSTR>>) to a new multivalue field. Then we continue with your mvexpand and table to show the results.

The Splunk docs page for foreach explains the use of <<FIELD>> and <<MATCHSTR>>.

Cheers,
Daniel
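For the sample event above, ports 80, 443, and 22 are True, so after mvexpand you get one row per open port. Roughly (noting that makeresults sets _time to the search time, gpss_src_ip isn't present in the generated test event, and row order may vary):

destination     open_ports
10.10.100.130   80
10.10.100.130   443
10.10.100.130   22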
I have a user that requested me to look into some of his reports. He wanted the permissions of report 2 to match those of report 1. The two are owned by different people, but two people with similar roles and access.

After we tweaked the settings for the report (shared in the app, read access for all, write permissions for the appropriate roles), they are still having issues viewing and editing.

The owner of report 1 is the owner/creator of the report. The report runs as owner and is shared globally, yet he doesn't have permission to edit the actual alert. He created the report initially, so how come he can't edit it? I even cloned it and reassigned ownership, to no avail. Report 1 runs as owner, while report 2 has the option to run as owner or as the user. How come one report has that option while the other is locked to running as owner?

As far as user two goes, his roles include permissions to the indexes used, as well as access to the app and the default search app, and he has even more roles and permissions than user 1. Yet he receives an error when trying to view the link that Splunk sends out with the attached report.

My question is: is there anywhere else I should be looking to find permission discrepancies? From everything I've seen, both users have access to the required indexes, have pretty much soft-admin on Splunk, and I assume they have viewed these reports in the past. From roles to users to capabilities, they have everything in order, or at least so it seems. Is there something I should check in the configs?

Thanks for any guidance.
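One way to compare the two reports side by side without clicking through the UI is the REST endpoint for saved searches; a sketch (the title filter is a placeholder for your actual report names):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="Report 1" OR title="Report 2"
| table title, eai:acl.owner, eai:acl.app, eai:acl.sharing, eai:acl.perms.read, eai:acl.perms.write, dispatchAs

dispatchAs is the setting behind "run as owner vs. user", and the eai:acl.* fields expose the effective sharing and read/write lists, which makes discrepancies between the two reports easy to spot.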
I often run into a case where I need to take the same dataset and compute aggregate statistics over different group-by sets, for instance if you want the output of this:

index=example
| stats avg(field1) by x,y,z
| append [ index=example | stats perc95(field2) by a,b,c ]

I am using the case of n=2 group-bys for convenience. In the general case there are N group-bys and arbitrary stats functions... what is the best way to optimize this kind of query without using append (which runs into subsearch limits)? Some of the patterns I can think of are below.

One way is to use appendpipe:

index=example
| appendpipe [ stats avg(field1) by x,y,z ]
| appendpipe [ stats perc95(field2) by a,b,c ]

Unfortunately this seems kind of slow, especially once you start adding more subsearches and preserving and passing a large number of non-transformed events through the search.

Another way is to use eventstats to preserve the event data, finishing off with a final stats:

index=example
| eventstats avg(field1) as avg_field1 by x,y,z
| stats first(avg_field1) as avg_field1, perc95(field2) by a,b,c

Unfortunately this is not much faster. I think there is another way using streamstats in place of eventstats, but I still haven't figured out how to retrieve the last event without just invoking eventstats last() or relying on an expensive sort.

Another way I've tried is intentionally duplicating the data using mvexpand, which has the best performance by far:

index=example
```Duplicate all the data```
| eval key="1,2"
| makemv delim="," key
| mvexpand key
```Set groupby = concatenation of group-by field values```
| eval groupby=case(key=1, x.",".y.",".z, key=2, a.",".b.",".c, true(), null())
| stats avg(field1), perc95(field2) by groupby

Are there any other patterns that are easier/faster? I'm curious how Splunk processes things under the hood. I know something called "map-reduce" is part of it, but I'd be curious to know if anyone can explain how to optimize this computation and why it's optimal in a theoretical sense.
Your second stats will not work because after the first stats you only have the User ID and count fields. The info_max_time and info_min_time fields no longer exist.
Thank you very much!
Hi, I am trying to create a custom app using Add-on Builder. In the request I am trying to use the global account details, but it's throwing an error. Not sure what I am missing here. Does anyone know about this issue? I am using the latest version of Add-on Builder.

Reference: https://docs.splunk.com/Documentation/AddonBuilder/4.1.3/UserGuide/ConfigureDataCollectionAdvanced

Thanks
I was asked to create a query that will allow the user to see only the open ports. An example log looks something like this:

10/24/2023 06:00:04,source=SXXXX-88880000,destination=10.10.100.130,DuBlIn_,11.11.119.111,port_80=True,port_443=True,port_21=False,port_22=True,port_25=False,port_53=False,port_554=False,port_139=False,port_445=False,port_123=False,port_3389=False

It looks easy enough: I want to table port_*=True, i.e. destination, src_ip, and the open ports. I asked our equivalent of ChatGPT about it, and I got this:

index=gpss sourcetype=acl "SXXXXXXX" destination="11.11.111.11"
| eval open_ports = case(
    port_123=="True", "123",
    port_139=="True", "139",
    port_21=="True", "21",
    port_22=="True", "22",
    port_25=="True", "25",
    port_3389=="True", "3389",
    port_443=="True", "443",
    port_445=="True", "445",
    port_53=="True", "53",
    port_554=="True", "554",
    port_80=="True", "80",
    true(), null()
  )
| where open_ports!=null()
| mvexpand open_ports
| table _time, destination, gpss_src_ip, open_ports

But the open_ports!=null() wasn't allowed; I get:

Error in 'where' command: Type checking failed. The '!=' operator received different types.

During testing I have a baseline event with three open ports, but the search above only outputs the first one in the list: it hits port 22 first, since that's the first one in the case statement that is true. My main question is: how do I successfully tell Splunk to only grab the ports that are True? Can I even do a wildcard somewhere and request to pull port_* WHERE True?

Thank you for any help.
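On the specific where error: null() can't be used with comparison operators, which is what the type-checking message is complaining about; the usual idiom is isnotnull():

| where isnotnull(open_ports)

That only fixes the error message, though. Because case() returns the first matching branch, open_ports will still hold a single port, so the rex and foreach approaches shared in the replies are the real fix for getting all open ports.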
@michaelissartel I haven't tested using an env variable as the field value before. If you do end up testing, can you post your results here for others?
I have a multiselect that does not interact with my Trellis chart. I would say it's not defined in my base search, but I'm not sure how to identify the issue or how to fix it.

Base search:

| eval Pat=spath(json, "Info.Pat.Time")
| eval Con=spath(json, "Info.Con.Time")
| eval Cov=spath(json, "Info.Cov.Time")
| eval Category = RED
| table _time, Pat, Con, Cov, Category

Multi-select:

| eval SysTime = Category + ":" + _time
| fields - Category
| untable SysTime Reason CurationValue
| eval Category = mvindex(split(SysTime, ":"), 0)
| eval _time = mvindex(split(SysTime, ":"), 1)
| fields - SysTime
| table Reason
| dedup Reason

Chart:

| search Category $t_category$ Reason $t_reason$
| timechart span=1h avg(Pat) as Pat, avg(Con) as Con, avg(Cov) as Cov
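A hedged guess at the disconnect (token and field names are from the post; the IN() form assumes the multiselect defines valuePrefix/valueSuffix/delimiter so the token expands to a quoted, comma-separated list): a bare $t_category$ after Category isn't a comparison, so the chart search may not be filtering at all. The usual pattern is:

| search Category IN ($t_category$) Reason IN ($t_reason$)
| timechart span=1h avg(Pat) as Pat, avg(Con) as Con, avg(Cov) as Cov

Separately, eval Category = RED reads RED as a field name, not a literal; if no RED field exists, Category comes out null and no filter on it can ever match. eval Category = "RED" assigns the literal string.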
Thanks @_JP. My goal was to account for servers in two data centers with identical names except for the 2nd character, which designates the data center, and to avoid having to maintain separate host files for each data center. I know the trailing wildcard works; I just wasn't sure whether adding a wildcard at the beginning or in the middle would work.
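For search-time matching at least, the * wildcard is accepted at any position in a field value (leading, middle, or trailing), and for exactly one variable character a regex via match() is tighter. A quick check against your real host names (the ^s.dc pattern is hypothetical, standing in for names that differ only in the 2nd character):

index=_internal
| stats count by host
``` match() pins exactly one variable character at position 2 ```
| where match(host, "^s.dc")

Whether the same placement works in the host files this thread refers to is worth testing there directly, since conf-file whitelist matching doesn't always mirror search-time behavior.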
I selected this answer as the solution. After coming back to this a few days later, it seems to be reporting what I was looking for. Not sure if there was some odd caching going on when I was testing over and over, but this at least gets me close to what I was looking for.
Sorry for the late reply. I tried your command and it's returning an error.