All Posts


Thank you! I wish there was a way to make it into text, just for labeling data.
The answer to the question of fields vs table has probably changed over time - the Splunk optimiser will sometimes optimise a table statement to a fields statement. However, as a 'rule', it's worth familiarising yourself with the command types: https://docs.splunk.com/Documentation/Splunk/9.1.0/SearchReference/Commandsbytype#Streaming_commands

In a clustered environment, where you have one or more indexers and a search head that searches those indexers, the type of command you want to have as much of as possible BEFORE any other types is the distributable streaming commands. You will see that fields is one of these, so when you use the fields statement, the operation of this command runs on the indexers and keeps the parallelisation of multiple indexers.

If you use the table command, you will see that this is a transforming command. Transforming commands cannot run on the indexers, so as soon as you use a transforming command in your search pipeline, all the data from all the indexers has to be sent to the search head, where the pipeline continues. See what runs where in this table: https://docs.splunk.com/Documentation/Splunk/9.1.0/Search/Typesofcommands#Processing_attributes

So, take advantage of this ability to keep the data at the indexers for as long as possible, as data at the search head will never go back to the indexers.
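As a minimal sketch of that ordering (the index, sourcetype and field names here are hypothetical), fields goes early in the pipeline because it is distributable streaming and runs on the indexers, while table is left to the very end for presentation:

index=web sourcetype=access_combined
| fields host, status, bytes
| stats sum(bytes) AS total_bytes BY host, status
| table host, status, total_bytes

Note that stats is itself a transforming command, so everything after it runs on the search head anyway; the point of the early fields is that the events are trimmed down before they ever leave the indexers.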
It looks like your intention is to capture raw events with "Event Type" and "Event ID" in them. It would have been so much easier if you had just described the actual goal. You are correct that when you use the list() function, the resultant field doesn't have a newline "\n" in it. It is simply a multivalued field that Splunk's Statistics tab presents on multiple lines.

I see two different approaches to this problem. But before that, let me comment that you should approach your developer or aggregator, whoever made these logs into multiple events, and beg, harass, or intimidate them to combine these into a single event for Splunk. It will not only be better for Splunk, but also for people who may read the log files manually.

The most straightforward approach is to not bother with regex or "\n":

index=xxx
| reverse
| stats list(_raw) as raw by _time
| eval Events = mvappend(mvfind(raw, "Event Type:"), mvfind(raw, "Event End:"))

Note "Events" here is also multivalued. In my opinion, multivalued fields are more useful subsequently. But if you really want them to be single valued with a newline, just insert the newline as exemplified in the next method.

If you really, really must go with "\n", just insert it:

index=xxx
| reverse
| stats list(_raw) as raw by _time
| eval raw = mvjoin(raw, " ")
| rex field=raw "(?<Events>(Event Type.*)((\n.*)?)+Event ID: \d+)"
If you have console access to your SH, you might do it with the following (assuming that you're looking for 1.2.3.4 and your Splunk is installed in the default place):

find /opt/splunk -type f -path \*/lookups/\* -print0 | xargs -0 grep '1\.2\.3\.4'
How do I change the colors of the destination nodes in the Network Diagram Viz app, especially if they are not present in the source column? For example, if I try | eval color=case(ip_dst="some_ip", "blue"), nothing happens.
@gcusello Sure, here are the sample events, which are all single-line events.

index=xxx
| reverse
| stats list(_raw) as raw by _time
| rex field=raw "(?<Events>(Event Type.*)((\n.*)?)+Event ID: \d+)"

Events:
2023-08-20 22:10:10.879 Date: 20/08/2023
2023-08-20 22:10:10.879 User: DILE\Administrator
2023-08-20 22:10:10.879 Event Type: Information
2023-08-20 22:10:10.879 Event Source: AdsmClientService
2023-08-20 22:10:10.879 Event Category: None
2023-08-20 22:10:10.879 Event ID: 4101
2023-08-20 22:10:10.879 Computer: MIKEDILE
Hi @Thulasinathan_M, yes, I supposed this; that's why I suggested it! Could you share some samples of your logs? Ciao. Giuseppe
Thanks @gcusello!! But those are single-line events, so I can't perform the rex before the stats.
Hi @Newbie_punk, SPL (Search Processing Language) isn't a procedural language, so you don't have a construct like if-then-else. But you can assign a value to a field based on a condition you define. E.g. if the same metric has a different field name in each index (e.g. metricA and metricB), you can use:

index=aData OR index=bData
| eval metric=coalesce(metricA,metricB)
| table metric

or use the if condition in the eval command:

index=aData OR index=bData
| eval metric=if(index=="aData",metricA,metricB)
| table metric

Adapt this approach to your condition. Ciao. Giuseppe
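If more than two conditions need to be covered, case() extends the same idea; a minimal sketch, where the third index (cData) and its field (metricC) are hypothetical placeholders:

index=aData OR index=bData OR index=cData
| eval metric=case(index=="aData", metricA, index=="bData", metricB, true(), metricC)
| table index, metric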
Hi @Thulasinathan_M, did you try inverting the two commands?

index=xxx
| bin span=1h _time
| rex "(?<Events>(\<Interested.*)((\n.*)?)+\<Ends Here\>)"
| stats values(Events) AS Events BY _time

In addition, when you use _time as a grouping key, always use a bin command to group the _time values, or use the timechart command; otherwise you'll have too many results. Ciao. Giuseppe
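For reference, a timechart variant that handles the time bucketing for you - a sketch assuming the same hypothetical index and Events extraction, and that an hourly count of matching events is what's needed:

index=xxx
| rex "(?<Events>(\<Interested.*)((\n.*)?)+\<Ends Here\>)"
| timechart span=1h count(Events) AS event_count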
Hi @mofonguero, as I said, disable any personal firewall you have. Then check "C:\Program Files\Splunk\var\log\splunk" to see if there's a log file about the installation. Ciao. Giuseppe
Hi Splunk Experts, I'm trying to list all the events with the same timestamp and capture only the required lines. But I'm not getting the expected results; it seems like there is no "\n" in the aggregated event even though it breaks into new lines. Kindly shed some light. Thanks in advance!!

I have events something like below, after aggregating them by _time:

Line1 blablabla
Line2 blablabla
<Interested line1>
<Interested line2>
<Interested line3>
<Ends Here>
Unwanted Line blablabla

Query used:

index=xxx
| reverse
| stats list(_raw) as raw by _time
| rex field=raw "(?<Events>(\<Interested.*)((\n.*)?)+\<Ends Here\>)"

Result of the above query:

<Interested line1>
Hi all, I created a lookup 6 months ago; now I have hundreds of lookups and I forgot its name. I am trying to find which lookup contains a particular IP address, but I couldn't find a way to do this. I want to find out which lookup an IP address is in. Any help would be appreciated!
We do have access to all the logs; we have PowerShell, Sysmon and Linux. We need to know if any user is uploading files through PowerShell, Sysmon, or any data source that the SOC can usually monitor. We need to create a dashboard that shows any file activity. @ITWhisperer thank you in advance.
index=winsec sourcetype=XmlWinEventLog EventCode=4743 NOT SubjectUserName="Win_Dir"
| bin _time span=5m
| stats values(EventCode) as EventCode, values(signature) as EventCodeDescription, values(TargetUserName) as Computer_user_deleted, values(TargetDomainName) as User_Domain, dc(TargetUserName) as computeruser_count by _time, SubjectUserName
| rename SubjectUserName as Deleted_by_User
| where computeruser_count > 10
| append
    [search index=winsec sourcetype=XmlWinEventLog EventCode=4726 NOT (SubjectUserName = "EC_Okta")
    | bin _time span=5m
    | stats values(EventCode) as EventCode, values(signature) as EventCodeDescription, values(object) as User_account_deleted, dc(object) as User_account_deleted_count by _time, SubjectUserName
    | rename SubjectUserName as src_user
    | where User_account_deleted_count > 10]
| append
    [search index=winsec sourcetype=XmlWinEventLog EventCode=4725 NOT (SubjectUserName = "EC_Okta" OR SubjectUserName = "Win_Dir")
    | bin _time span=5m
    | stats values(EventCode) as EventCode, values(signature) as EventCodeDescription, values(TargetUserName) as disabled_account, values(TargetDomainName) as User_Domain, dc(TargetUserName) as disabledaccount_count by _time, SubjectUserName
    | rename SubjectUserName as src_user
    | where disabledaccount_count > 10]
Describing problems in generic terms is not always helpful, as it just leads to more questions about what you are trying to do and with what. For example, one way of interpreting what you have said could be resolved like this:

<search indexA>
| appendpipe
    [| stats count as _count
    | where _count = 0
    | search indexB]
Hello Giuseppe, as far as I know I only have basic configurations in my firewall and anti-virus; however, if there are any specific settings that you might know of, please let me know. I am not blocking any ports that could prevent me from downloading it either. It starts downloading but then stops prematurely and rolls back. I tried downloading it on my desktop, then used a VM with a Linux OS, and it didn't work either. Yes, I am trying to download the latest version, and I tried different versions too. It seems like this is not for me.
The best solution will depend on some other characteristics of the two datasets, and what exactly you plan to do with the surviving data. A generic approach, however, is to use exactly "OR". The idea is to retrieve all data, then retain the data from only one of the indices. Suppose you REALLY want to present all raw data (instead of using stats for presentation), you can do

index IN (aData, bData) <other criteria>
| eventstats values(index) as indices
| where index == mvindex(indices, 0)

Because values() returns its results sorted, mvindex(indices, 0) keeps the events from whichever index sorts first - aData if it has any results, otherwise bData.
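If a stats presentation is acceptable after all, the same "prefer one index, fall back to the other" idea can be expressed at aggregation time instead of on raw events. A sketch, assuming the metricA/metricB field names used elsewhere in this thread and a hypothetical host field as the grouping key:

index=aData OR index=bData <other criteria>
| stats values(metricA) as metricA, values(metricB) as metricB by host
| eval metric=coalesce(metricA, metricB)
| table host, metric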