All Posts


Hi @sarge338, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Another incredible answer!  These helped me a lot!
Incredible answer!
Note: botsv1 means absolutely nothing to most volunteers in this forum.  If there is something special about this dataset, you need to explain it very clearly.  Also important: when you have sample code that doesn't do what you wanted, you need to illustrate what it actually outputs, and explain why it doesn't meet your requirement if that's not painfully obvious.  Did your sample code give you the desired result? Based on your sample code, I speculate that the so-called URI is in the field src_ip?  Why do you use list, not values?  What is the use of a list of count?  What's wrong with this simpler formula?

index=indexname
| stats values(domain) as Domain count as total by src_ip
| sort - total
| head 10

Without SPL, can you explain/illustrate what the data is like (anonymize as necessary), illustrate what the end result looks like using the illustrated data, and describe the logic between that data and your desired result?  This is the best way to get help with data analytics. I can speculate that you want to display individual counts of domains by src_ip, too.  If so, designing a proper visual vocabulary is a lot better.  For example:

index=indexname
| stats count by domain, src_ip
| sort - count
| eval DomainCount = count . " (" . domain . ")"
| stats list(DomainCount) as DomainCount, sum(count) as total by src_ip
| sort - total DomainCount
| head 10
| fields - total

Just note that this is mathematically equivalent to your code.  So, you will need to illustrate the output and explain why that's not the desired result.
index="index1" |lookup lookup ip_address as src_ip | where isnotnull(Cidr)
What do you mean by transaction?
The requirement is a little vague.  Even so, you can probably do without join, which is more expensive than other options.  Most commonly, stats is your friend. The best I can speculate is that your intention is to match select fields from these sources by a common data point you call dest_ip, which can have a different field name in each data source.  Something like:

(index=*-palo threat="SMB: User Password Brute Force Attempt(40004)" src=* dest_port=445) OR (index=*-sep device_ip=*) OR (index="*wineventlog" src_ip=*)
| eval dest_ip=coalesce(dest, device_ip, src_ip)
| eval "Palo Detected User" = if(match(index, "-palo"), user, null())
| rename user as username
| fields future_use3 src_ip dest_ip dest_port "Palo Detected User" device_name user_name rule threat repeat_count action ComputerName username
| stats values(*) as * by dest_ip
| sort src_ip
| rename future_use3 AS "Date/Time" src_ip AS "Source IP" dest_ip AS "Destination IP" user_name AS "Symantec Detected User @ Destination" device_name AS "Symantec Destination Node" rule AS "Firewall Rule" threat AS "Threat Detected" action AS "Action" repeat_count AS "Repeated Times"
> Invalid key in stanza [distributedSearch] in /opt/splunk/etc/system/local/distsearch.conf, line 2: useIPAddrAsHost (value: false).

You get the above message because `useIPAddrAsHost` is not part of the distsearch.conf.spec file. Apart from the message, it works.

> Are you sure that sslVerifyServerName = true automatically sets useIPAddressAsHost = false?

From 9.1 onwards.
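For illustration only, here is a sketch of what the file the message complains about would contain, reconstructed purely from the error text (stanza name, key, value, and path all come from the message itself):

# /opt/splunk/etc/system/local/distsearch.conf
[distributedSearch]
useIPAddrAsHost = false

Since useIPAddrAsHost is not listed in distsearch.conf.spec, splunkd flags it as an invalid key; removing that line should make the warning go away.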
You could just take the last line using tail:

index="" source=""
| fields queryHits
| table queryHits
| addcoltotals labelfield=total label="queryHits"
| tail 1

but there's a better way to get just the total:

index="" source=""
| stats sum(queryHits) as queryHits
Are you sure that sslVerifyServerName = true automatically sets useIPAddressAsHost = false? On the server side, sslVerifyServerName applies to the Splunk search head communicating with the indexers for distributed search; sslVerifyServerName asks for the certificate CN or SAN to match the server name returned. It's the indexer that has to respond to the search with its server name and not an IP. It seems like these two attributes should be on separate hosts.

I am trying to understand as well, but I don't. Right now we're on 9.0.2 and I'm now getting the below error when using this attribute, where I didn't before:

- Invalid key in stanza [distributedSearch] in /opt/splunk/etc/system/local/distsearch.conf, line 2: useIPAddrAsHost (value: false).

Thanks much in advance.
Just a note: you might have to define a function in your python code, something like:

def summary():
    <your python code>

Hope that helps!
I have used the below query to get the total from that column:

index="" source=""
| fields queryHits
| table queryHits
| addcoltotals labelfield=total label="queryHits"

Now how do I get only the last row, which is the total, to display in my dashboard? I tried using stats count but it's not fetching the correct value.
Match type has no meaning with inputlookup. Your subsearch will get expanded to a set of conditions like (src_ip="1.2.3.4/24") OR (src_ip="4.5.6.7/23") OR ... Verify your expanded search in the job inspector to check whether it matches the field naming in your events.
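As a minimal sketch of what that expansion means (the lookup name cidr_lookup and the index name are assumptions for illustration, not from the original post): a search such as

index=index1 [| inputlookup cidr_lookup | fields src_ip]

gets rewritten into literal equality terms, roughly

index=index1 ((src_ip="1.2.3.4/24") OR (src_ip="4.5.6.7/23") OR ...)

so CIDR strings from the lookup are compared as plain text against src_ip and will not match individual addresses. CIDR matching only applies when you go through the lookup command (or an automatic lookup) against a lookup definition that has a CIDR match_type configured for that field, e.g. match_type = CIDR(ip_address) in the lookup definition.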
I don't know what those numbers are, but remember that just because you're ingesting 1GB of data daily doesn't mean you're going to consume 1GB of disk space daily. Firstly, you store compressed raw data. It's gzipped, so it compresses fairly well, as text data generally does. It takes up around 1/7, maybe 1/6 of the original raw data size on average. Along with that you store index files, which make up around another 1/3 of the original raw data size. Roughly estimating, you need about 1/2 of the original raw data size to store just the indexed data. But if you don't use separate storage, you also need to account for additional summaries if you use accelerations.
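To put rough numbers on those ratios (just an illustrative back-of-the-envelope using the fractions above): 1GB of raw data per day would mean roughly 1/7 to 1/6 of that, say 140-170MB, for compressed raw data, plus roughly another 1/3, about 330MB, for index files, i.e. on the order of 500MB of disk per day per copy, before any replication or acceleration summaries.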
@ITWhisperer I mean id, t, … key values extracted, not transaction.
Not sure I understand, you just said all fields are already extracted?
@ITWhisperer How about the other part? FYI: I mean extracting key values one by one with the rex command, not the whole transaction.
Your events will be grouped together by ID.
@ITWhisperer ?
I am getting extracted_host, extracted_source, and extracted_sourcetype fields in interesting fields, along with host, source, and sourcetype in selected fields, while ingesting logs using HEC input in Splunk Cloud. Can someone help explain why I am getting extracted_host, extracted_source, and extracted_sourcetype fields in the logs even though they are not defined on the source end?