All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.
Hello @jmmontejo, I am unable to run the Dashboard in Splunk. Can you please paste the full XML?
Hello @10061987, You can use fields from the first row of the results in the alert, e.g. $result.email$, assuming your search includes the email field. Then, if you trigger the alert once per result (rather than just once overall), each result will execute the action with its corresponding row from the events. Reference - https://community.splunk.com/t5/Alerting/Splunk-Alerts-How-to-use-email-address-from-variable/m-p/633020   Please accept the solution and hit Karma, if this helps!
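As a sketch of how this fits together in savedsearches.conf (the stanza name, index, and field names here are hypothetical; only the email-field assumption from the post carries over):

```
# savedsearches.conf sketch -- stanza name and search are hypothetical
[Notify user on failed login]
search = index=auth action=failure | stats count by email
# 0 = trigger the action once per result row, so $result.*$ tokens
# resolve against each row in turn
alert.digest_mode = 0
action.email = 1
action.email.to = $result.email$
action.email.subject = Alert for $result.email$
```

With digest mode enabled instead, the action fires once and $result.email$ would only resolve against the first row.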
Hello @nina, There are a few ways:
- If you are planning to showcase some use cases as part of your project, Splunk Security Essentials (https://splunkbase.splunk.com/app/3435) has some built-in datasets, for example for sample brute-force attack detection.
- The Boss of the SOC v3 repository (https://github.com/splunk/botsv3) has a number of sample datasets covering multiple sourcetypes.
- You can use Eventgen (https://splunkbase.splunk.com/app/1924) to generate "more" events based on existing event formats.
Please accept the solution and hit Karma, if this helps!
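For the Eventgen option, a minimal eventgen.conf sketch might look like the following (the sample file name and timestamp pattern are hypothetical; adjust them to your own sample events):

```
# eventgen.conf sketch -- sample file name and token regex are hypothetical
[sample_auth.log]
mode = sample
interval = 60
earliest = -60s
latest = now
# rewrite the timestamp in each replayed event so data looks current
token.0.token = \d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}
token.0.replacementType = timestamp
token.0.replacement = %Y-%m-%dT%H:%M:%S
```

Eventgen reads sample events from the named file and replays them on the configured interval, substituting each token match.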
Hi @gcusello, Thanks for replying. I am using a standalone search head and would like to move to another standalone search head, not a search head cluster. Is the process for migrating apps to a standalone search head the same?
Hello @grotti, If I understand the issue correctly, you are getting the expected results, but not for 12 hours. Is that right? If so, you can use the "| addinfo" command as below:

| inputlookup append=T incident_review_lookup
| addinfo
| where time>=info_min_time
| rename user as reviewer
| `get_realname(owner)`
| `get_realname(reviewer)`
| eval nullstatus=if(isnull(status),"true","false")
| `get_reviewstatuses`
| eval status=if((isnull(status) OR isnull(status_label)) AND nullstatus=="false",0,status)
| eval status_label=if(isnull(status_label) AND nullstatus=="false","Unassigned",status_label)
| eval status_description=if(isnull(status_description) AND nullstatus=="false","unknown",status_description)
| eval _time=time
| fields - nullstatus

It would give you the results based on whatever time range you select from the time range picker. Please accept the solution and hit Karma, if this helps!
Hi @gcusello, I can explain the problem with some screenshots. The logs come from an antivirus (policies, detected viruses, and so on). In the first image you can see the file was created at 00:35:00; this is an antivirus scan. This is the content of the file: ...but as you can see, the timestamp shows 06:35 (that's why I added the TZ option in props.conf). Finally, this is an image of the Splunk search; the _time column is aligned with the timestamp in the log content. The record was supposed to arrive at 00:35 but was indexed at 06:35 (6 hours after the scan). The host's time zone is set to GMT-6. I tried looking in the AV settings to set the time to GMT-6, but it does not have that option.
Hi, does the new export feature also allow exporting a table panel to PDF with the entire data set, in case the data overflows a single table page and pagination kicks in, hiding the remaining events on other pages? Thank you, Wojtek
Hello everyone, I'm working on a project, "Splunk Enterprise: An organization's go-to in detecting cyber threats". Please, how/where can I get datasets and logs to use for my project?
Thank you for clarifying that
Hi @felipesodre , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @ucorral, could you share some sample of your logs? Ciao. Giuseppe
@gcusello I added INDEXED_EXTRACTIONS=csv, then I restarted the Splunk daemon.

[my_custom_sourcetype]
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
TIME_FORMAT=%Y-%m-%dT%H:%M:%S,
TIME_PREFIX=^
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TZ=America/Mexico_City
disabled=false

But I continue to receive logs timestamped 6 hours ago. Copying the last log received in Splunk:

9/30/23 6:35:02.000 AM
2023-09-30T06:35:02,Time of completion: 00:35:02 ***** 0 sec (00:00:00)
host = *******
source = /var/log/****/*****log.****.txt
sourcetype = my_custom_sourcetype

As you can see, the last log was received at 06:35:02 AM but was created at 00:35:02 in my current time in Mexico City. At the moment no more logs have appeared in Splunk. But now I realize the logs arrive split for some reason.
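One way to narrow this down is to check whether the problem is timestamp parsing or delayed arrival, by comparing _time with _indextime. A sketch, assuming the sourcetype name from the post:

```
index=* sourcetype=my_custom_sourcetype
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| eval lag_seconds=_indextime - _time
| table _time index_time lag_seconds
```

If lag_seconds is roughly 21600 (6 hours), the event was parsed with the wrong time zone at index time; if it is near zero, the event genuinely arrived 6 hours after the scan ran.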
Hi I'm currently working on obtaining Windows Filtering Platform event logs to identify the user responsible for running an application. My goal is to enhance firewall rules by considering both the application and the specific user. To achieve this, I've set up a system to send all logs to Splunk, which is already operational. However, I've encountered an issue with WFP event logs not displaying the authorized principal user who executed the application. This absence of user information makes it challenging to determine who used what application before I can further refine the firewall rules. If you have any insights or suggestions on how to address this issue, I would greatly appreciate your assistance. I can readily access various details such as destination, source, port, application, and protocol, but the missing username is a crucial piece of information I need. Thank you for any guidance you can provide.
Happy that worked for you!! Happy Splunking
Sooner or later the default.xml needs to be deleted to allow new menu items from an updated app to appear.
Hi @Utkc137, good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hey thanks a ton! Been breaking my head on this issue. 
Hi @Utkc137, sorry, what's the difference? Ciao. Giuseppe
The idea is to find the distinct count of ids where A=1 and B=1, not the count of events where these values are 1.
Hi, Please try below:

| stats max(A) as ACnt, max(B) as BCnt, max(C) as CCnt by month, id
| stats sum(ACnt) as ACnt, sum(BCnt) as BCnt, sum(CCnt) as CCnt by month
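An equivalent single-pass sketch, assuming A, B, and C are 0/1 flags as in the original question, counts distinct ids directly with dc(eval(...)):

```
| stats dc(eval(if(A==1, id, null()))) as ACnt,
        dc(eval(if(B==1, id, null()))) as BCnt,
        dc(eval(if(C==1, id, null()))) as CCnt by month
```

The if() maps ids whose flag is not 1 to null, and dc ignores nulls, so each id is counted at most once per month regardless of how many events it has.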