All Posts


Hello Kiran,  Here the action field has two values, Success and Failure. I need to list the successful and failed users by month for the past year.
@navan1  Try something like this. I don't have action=success and action=failure events, so I just used random sample values; please modify the query to match your requirement.  I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
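For reference, a minimal sketch of what such a query could look like, assuming the base search from this thread (index=my_index app=a) and that action carries the literal values success and failure; the time range and aggregation are assumptions to adjust to your data:

index=my_index app=a earliest=-12mon@mon (action=success OR action=failure)
| eval Month=strftime(_time, "%Y-%m")
| stats values(user) AS users dc(user) AS user_count BY Month action
| sort Month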
@shenoyveer You can use a trellis layout, split by host.  I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
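As an illustration, a sketch of the kind of search a trellis layout pairs with, assuming the vm-details index used elsewhere in this thread; the trellis split itself is chosen in the visualization's format options rather than in SPL:

index="vm-details" earliest=-7d
| timechart span=1d count by host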
@shenoyveer  Can you check this?  The x-axis will be the first column/field in the results, so put the date as the first field after the by clause.
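For illustration, a hedged sketch using chart's over/by form, which makes the x-axis field explicit; the index name comes from the follow-up post and the date format is an assumption:

index="testing"
| eval date=strftime(_time, "%Y-%m-%d")
| chart count over date by host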
Is there any alternative to timechart? I want to have a particular date on the x-axis.
Hello Team,  How do I search a specific app's user success and failure events by month, for Jan to Dec? My base search:

index=my_index app=a | table app action user | eval Month=strftime(_time,"%m") | stats count by user Month

I am not getting any results with the above search.
@shenoyveer  If you take out the eval from the query, you must also remove the date field from it.

index="vm-details" | timechart count by host
Our Splunk security alert integration stopped working last month (December); we'd been sending alerts automatically from Splunk Cloud to our onmicrosoft.com@amer.teams.ms e-mail address. Is support for this being deprecated on the Microsoft side? Or is this a whitelisting issue? Has anyone else experienced a similar problem?
Thank you @ITWhisperer  In layman's terms: use eventstats to find max(triggeredEventNumber) BY userName. Next, use where to keep only the events matching that max(triggeredEventNumber). Then I used stats values(field) to extract the values for the fields I am interested in.
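For readers following the thread, a sketch of that search shape; triggeredEventNumber and userName come from the post, while the index name and the values() fields are placeholders:

index=my_index
| eventstats max(triggeredEventNumber) AS maxTriggered BY userName
| where triggeredEventNumber=maxTriggered
| stats values(status) AS status values(result) AS result BY userName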
Tried another query too:

index="testing" | eval date=strftime(_time, "%Y-%m-%d") | stats count by date, host

but it's still not showing the hostname in the dashboard.
Thanks @gcusello for your response and guidance.  I tried your query and it's giving me results; however, it's only populating 5 fields, which are the ones common to both indices.

How do you suggest I modify the query so the output also displays all of the following fields from index="*firewall*"?

src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name

My intent is to display the data in the following order, which includes fields from both indices (the fields specific to index=corelight are ssl_version AS Version and ssl_cipher AS ENCRPT_ALGO, noted for reference):

src_zone AS From, src_ip AS Source, dest_zone AS To, dest_ip AS Destination, server_name AS SNI, transport AS Protocol, dest_port AS Port, app AS "Application", rule AS "Rule", action AS "Action", session_end_reason AS "End Reason", packets_out AS "Packets Out", packets_in AS "Packets In", src_translated_ip AS "Egress IP", dvc_name AS "DC", ssl_version AS Version, ssl_cipher AS ENCRPT_ALGO

Thanks!
Hi Kiran,  Thanks for the prompt reply. It's not working for me; after removing the eval, no data shows up.  My goal is to get the hostname when hovering over the host_count value from the query.
Here is my raw data in the Splunk query:

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<application xmlns="http://www.abc.com/services/listService">
<header>
<user>def@ghi.com</user>
<password>al3yu2430nald</password>

I want to mask the password value and show it in the Splunk output as:

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<application xmlns="http://www.abc.com/services/listService">
<header>
<user>def@ghi.com</user>
<password>xxxxxxxxxxxx</password>

How can I do that?
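One way this is often approached at search time is eval's replace() function on _raw; a sketch, with the index and sourcetype as placeholders (masking the value permanently before indexing would instead be done with SEDCMD in props.conf):

index=my_index sourcetype=soap_xml
| eval _raw=replace(_raw, "<password>[^<]*</password>", "<password>xxxxxxxxxxxx</password>")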
Hi @sdcig , as you experienced, don't use join because it's very slow! Use the OR conditions correlated with stats, something like this (adapt it to your requirements):

(index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*) OR (index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100)
| dedup src_ip, dest_ip
| fields src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
| stats values(*) AS * BY dest_ip
| rename src_zone AS From, src_ip AS Source, dest_zone AS To, dest_ip AS Destination, server_name AS SNI, transport AS Protocol, dest_port AS Port, app AS "Application", rule AS "Rule", action AS "Action", session_end_reason AS "End Reason", packets_out AS "Packets Out", packets_in AS "Packets In", src_translated_ip AS "Egress IP", dvc_name AS "DC"

If possible, try to avoid using an asterisk at the beginning of a string (as in your case in index, sourcetype and host). Ciao. Giuseppe
Hello, I have 2 queries where the indices are different and have a common field, dest_ip, which is my focus (same field name in both indices). Please note that there are also some other common fields such as src_ip, action etc.

Query 1:

index=*corelight* sourcetype=*corelight* server_name="*microsoft.com*"

additional fields: action, ssl_version, ssl_cipher

Query 2:

index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100

additional fields: _time, src_zone, src_ip, dest_zone, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name

I'm trying to output all the corresponding server_names for each dest_ip, as a table with all the listed fields from both query outputs. I'm new to Splunk and learning my way; I've tried the following so far.

A) Using join (which is usually very slow and sometimes doesn't give me a result):

index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*
| join dest_ip [ search index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100 | fields src_ip, src_user, dest_ip, rule, action, app, transport, version, session_end_reason, dvc_name, bytes_out ]
| dedup server_name
| table _time, src_ip, dest_ip, transport, dest_port, app, rule, server_name, action, session_end_reason, dvc_name
| rename _time as "timestamp", transport as "protocol"

B) Using an OR:

(index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*) OR (index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100)
| dedup src_ip, dest_ip
| table src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
| rename src_zone AS From, src_ip AS Source, dest_zone AS To, dest_ip AS Destination, server_name AS SNI, transport AS Protocol, dest_port AS Port, app AS "Application", rule AS "Rule", action AS "Action", session_end_reason AS "End Reason", packets_out AS "Packets Out", packets_in AS "Packets In", src_translated_ip AS "Egress IP", dvc_name AS "DC"

My questions:

1. Would you suggest a better way to write/construct my above queries?

2. In my OR output, I only see a couple of columns populating values (e.g. src_ip, dest_ip, action) while the rest are empty. My guess is they're populating because I'm doing an inner join and these are the common fields between the two. Since I'm unable to populate the others, maybe I need to do a left join?

3. Can you kindly guide me on how to rename fields specific to each index when combining queries using OR? I've tried a few times but haven't been successful. For example, in my above OR statement, how and where in the query do I rename the field ssl_cipher in index=*corelight* to ENCRYPT_ALGORITHM (as in the sketch below)?

Many thanks!
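On question 3, a hedged sketch of the idea: once the corelight and firewall events are combined by the OR and rolled up with stats, rename operates on whatever fields are present, regardless of which index they came from, so the corelight-only fields can simply be included in the aggregation and renamed at the end (trimmed to the relevant fields for brevity):

(index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*) OR (index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100)
| stats values(server_name) AS server_name values(ssl_version) AS ssl_version values(ssl_cipher) AS ssl_cipher values(action) AS action BY dest_ip
| rename server_name AS SNI, ssl_version AS Version, ssl_cipher AS ENCRYPT_ALGORITHM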
Yesterday I upgraded Splunk on one of my Deployment Servers from 9.3.1 using the 9.4.0 rpm on an Amazon Linux host, and ran into the following error after starting Splunk with /opt/splunk/bin/splunk start --accept-license --no-prompt --answer-yes:

(typical batch of startup messages here ... until)
sh: line 1: 16280 Segmentation fault      (core dumped) splunk migrate renew-certs 2>&1
ERROR while running renew-certs migration.

Repeated attempts at starting failed to render anything different, and I ended up having to revert to the prior version. This is, in fact, the first failed upgrade I've had since I started using this product over 10 years ago. I have backed out of the upgrade, but considering the vagueness of this error message, I'm asking the community if anyone has seen this before.
Hi @avikc100 , good for you, see you next time! Ciao and happy splunking, Giuseppe  P.S.: Karma Points are appreciated by all the contributors.
You're right... the original question wasn't clear enough.  Well it was to me... but that is always the case I suppose! I'll consider using the existing Web DM or potentially creating a new one that will allow a little more customization for what I'm after. Thank you for the input.
It was not clear from the OP that the data comes from clients rather than servers/proxies - just that the data is *about* clients.  That doesn't mean the DM can't be used, but some fields won't apply.  It's normal for a DM to not have all fields populated.
Hi @richgalloway - I considered this one.  The description is:

"The fields in the Web data model describe web server and/or proxy server data in a security or operational context."

Looking at the fields in this data model, it seems to me to be geared more toward web servers, not the clients of those servers. Many recommended fields in this data model would not apply to web browsing logs from the client's perspective.  Is attempting to squeeze logs from the clients into this data model commonly done?

And to answer your questions: we have other data (from web servers) which uses the Web data model. Furthermore, the data I want to group/find with this search is definitely NOT CIM compliant.  As the number of data sources for web browsing is high in our environment (something like 10+ sources), many of the sources do not have the same information available.  I'm building a list of fields myself to standardize the names and would ideally map them to a data model.
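As a sketch of what that field standardization might look like at search time (purely illustrative: the source index and original field names here are made up, and a permanent mapping would normally use field aliases or calculated fields rather than per-search renames), mapping hypothetical client-log fields onto the Web data model's url, src and http_user_agent names:

index=client_browsing
| rename visited_url AS url, client_ip AS src, browser_agent AS http_user_agent
| table _time src url http_user_agent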