All Posts


What do you mean split by host? Perhaps if you share what your events actually look like (anonymised of course), we might be able to figure out what it is you are trying to do.
OK, thank you for your reply. But if I use mvexpand, the IPs get split out from their host. Is there any workaround that avoids splitting by host? If I keep the table grouped, for example host  ip1  ip2, is there any query that can detect the internet-facing hosts by searching across all the values in the IP field?
How can I extract all the data listed inside a dashboard using python SDK?
By group field, I assume you are referring to a multi-value field? If so, you could expand your events by the multi-value field so that each part can be evaluated separately:

| mvexpand IP
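
If the concern is losing the grouping by host after expanding, one option is to regroup with stats afterwards. A minimal sketch, assuming the fields are named host and IP and the public range is 172.1.1.0/24 as in the question:

| mvexpand IP
| eval is_public=if(cidrmatch("172.1.1.0/24", IP), 1, 0)
| stats values(IP) AS IP max(is_public) AS is_public BY host
| eval "internet facing"=if(is_public==1, "Yes", "No")
| fields - is_public

This evaluates each IP on its own row, then stats puts the IPs back into one multi-value row per host, with "internet facing" set to Yes if any of that host's IPs fall in the range.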
Can you find by looking through your logs any data which indicates the behaviour you are looking for?
Hi,
I have created a table with host and grouped IP addresses; a host can have both public and private IP addresses. My table looks like this:

Host      IP             id
Host A    10.1.1.1       21
          172.1.1.1

I have an IP range to identify the public IPs. I need to create another field which is "Yes" if the range matches and "No" if not. I have used this query for the field:

| eval "internet facing"=case(cidrmatch("172.1.1.0/24", IP), "Yes", 1=1, "No")

but this eval only works on a field which has a single IP. On my grouped IP field, it's not working. Please assist on this. Thank you
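
One possible way to keep the grouped IP field intact is to test the multivalue field directly with mvfilter. A minimal sketch (assuming the multivalue field is named IP; the CIDR range is taken from the post):

| eval public_ips=mvfilter(cidrmatch("172.1.1.0/24", IP))
| eval "internet facing"=if(coalesce(mvcount(public_ips), 0) > 0, "Yes", "No")
| fields - public_ips

mvfilter keeps only the values of IP that fall inside the range, so the host row is never split; mvcount then simply checks whether anything was left.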
Got this figured out! The JS version sent the `body` part wrong: It is not supposed to be JSON encoded but HTTP query string encoded. The working version is here in GitHub: https://gist.github.com/ww9rivers/dc3fd9ba8d2817b9fc986aa9457a2b61
Thank you! Wish there was a way to make it into a text just for labeling data.
The answer to the question of fields vs table has probably changed over time - the Splunk optimiser will sometimes optimise a table statement to a fields statement. However, as a 'rule', it's worth familiarising yourself with the command types: https://docs.splunk.com/Documentation/Splunk/9.1.0/SearchReference/Commandsbytype#Streaming_commands

In a clustered environment, where you have one or more indexers and a search head that searches those indexers, the type of command you want as much of as possible BEFORE any other types is the Distributable Streaming commands. You will see that fields is one of these, so when you use the fields statement, the operation runs on the indexers and keeps the parallelisation of multiple indexers.

If you use the table command, you will see that it is a transforming command. Transforming commands cannot run on the indexers, so as soon as you use a transforming command in your search pipeline, all the data from all the indexers has to be sent to the search head, where the pipeline continues. See what runs where in this table: https://docs.splunk.com/Documentation/Splunk/9.1.0/Search/Typesofcommands#Processing_attributes

So, take advantage of this ability to keep the data at the indexers for as long as possible, as data at the search head will never go back to the indexers.
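
As an illustration, here's a minimal sketch with made-up index and field names:

index=web sourcetype=access_combined
| fields host, status, bytes    ``` distributable streaming - runs on the indexers ```
| stats sum(bytes) AS total_bytes BY host, status    ``` transforming - from here on, results are processed on the search head ```
| table host, status, total_bytes    ``` presentation only, kept at the very end ```

Trimming with fields before the first transforming command means the indexers reduce the data in parallel and only the trimmed results travel to the search head.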
It looks like your intention is to capture raw events with "Event Type" and "Event ID" in them. It would have been so much easier if you had just described the actual goal. You are correct that when you use the list() function, the resultant field doesn't have a newline "\n" in it. It is simply a multivalue field that Splunk's Statistics tab presents on multiple lines.

I see two different approaches to this problem. But before that, let me comment that you should approach your developer or aggregator, whoever made these logs into multiple events, and beg, harass, or intimidate them to combine these into a single event for Splunk. It will not only be better for Splunk, but also for people who may read the log files manually.

The most straightforward approach is to not bother with regex or "\n":

index=xxx
| reverse
| stats list(_raw) as raw by _time
| eval Events = mvappend(mvfind(raw, "Event Type:"), mvfind(raw, "Event End:"))

Note that "Events" here is also multivalued. In my opinion, multivalue fields are more useful subsequently. But if you really want them to be single valued with newlines, just insert the newline as exemplified in the next method.

If you really, really must go with "\n", just insert it:

index=xxx
| reverse
| stats list(_raw) as raw by _time
| eval raw = mvjoin(raw, " ")
| rex field=raw "(?<Events>(Event Type.*)((\n.*)?)+Event ID: \d+)"
If you have console access to your SH, you might do it with the following (assuming that you're looking for 1.2.3.4 and your Splunk is installed in the default place):

find /opt/splunk -type f -path \*/lookups/\* -print0 | xargs -0 grep '1\.2\.3\.4'
How do I change the colors of the destination nodes in the network diagram viz app especially if they are not present in the source column? For example, if I try | eval color=case(ip_dst="some_ip", "blue").....nothing happens.
@gcusello Sure, here is a sample of the events, which are all single-line events.

index=xxx | reverse | stats list(_raw) as raw by _time | rex field=raw "(?<Events>(Event Type.*)((\n.*)?)+Event ID: \d+)"

Events:

2023-08-20 22:10:10.879 Date: 20/08/2023
2023-08-20 22:10:10.879 User: DILE\Administrator
2023-08-20 22:10:10.879 Event Type: Information
2023-08-20 22:10:10.879 Event Source: AdsmClientService
2023-08-20 22:10:10.879 Event Category: None
2023-08-20 22:10:10.879 Event ID: 4101
2023-08-20 22:10:10.879 Computer: MIKEDILE
Hi @Thulasinathan_M, yes, that's what I supposed, and that's why I hinted at this! Could you share a sample of your logs? Ciao. Giuseppe
Thanks @gcusello!! But those are Single Line Events, so I can't perform REX before stats.
Hi @Newbie_punk,
SPL (Search Processing Language) isn't a procedural language, so it doesn't have an if-then-else construct. But you can assign a value to a field based on a condition you define. For example, if the same metric has a different field name in each index (e.g. metricA and metricB), you can use:

index=aData OR index=bData
| eval metric=coalesce(metricA,metricB)
| table metric

or use a condition inside the eval command:

index=aData OR index=bData
| eval metric=if(index=="aData",metricA,metricB)
| table metric

Adapt this approach to your conditions.
Ciao.
Giuseppe
Hi @Thulasinathan_M,
have you tried inverting the two commands?

index=xxx
| bin span=1h _time
| rex "(?<Events>(\<Interested.*)((\n.*)?)+\<Ends Here\>)"
| stats values(Events) AS Events BY _time

In addition, when you use _time as a grouping key, always use a bin command to group the _time values, or use the timechart command; otherwise you'll have too many results.
Ciao.
Giuseppe
Hi @mofonguero, as I said, disable any personal firewall you have. Then look in "C:\Program Files\Splunk\var\log\splunk" to see if there is any log file about the installation. Ciao. Giuseppe
Hi Splunk Experts,
I'm trying to list all the events with the same timestamp and capture only the required lines. But I'm not getting the expected results; it seems like there is no "\n" in the aggregated event even though it breaks into new lines. Kindly shed some light. Thanks in advance!!

I have events something like below, after aggregating them by _time:

Line1 blablabla
Line2 blablabla
<Interested line1>
<Interested line2>
<Interested line3>
<Ends Here>
Unwanted Line blablabla

Query I'm using:

index=xxx | reverse | stats list(_raw) as raw by _time | rex field=raw "(?<Events>(\<Interested.*)((\n.*)?)+\<Ends Here\>)"

Result for the above query:

<Interested line1>
Hi all, I created a lookup 6 months ago, and now I have hundreds of lookups and I've forgotten its name. I'm looking for an IP address and want to find out which lookup it is in, but I couldn't find a way to do this. Any help would be appreciated!