All Posts


Generally, system-wide configuration lives in etc/system and etc/apps, whereas users' content is in etc/users. Remember, however, that your content probably depends on add-ons which do the extractions. If you use data models, they should be properly defined and configured. And so on.
There is also an app providing a more flexible email command than the built-in sendemail one. I don't remember the name, but it's easily findable on Splunkbase. Having said that, remember that it's risky to use "external" data this way, because you might end up sending emails (and sometimes a lot of them) to non-existent or empty email addresses.
Hi @PickleRick, thanks for replying. My issue is that the Splunk SH is running on Linux 6 and I have to migrate it to Linux 8, because Splunk 9.1 is not supported on legacy Linux. So I have built a new instance and added it to the cluster. As a standalone SH it can search the data and I can make it primary, but I'm not sure how to copy dashboards/alerts built by users who are no longer active. That's why I am looking for options to copy them from the old instance to the new one before making it active.
Well... the idea is relatively simple (you want to capture the old SH state and set the new SH to the same state), but the details can be tricky. Generally, you want to move the config files (both system-wide and users') and the KV store state. The problem is that if you're migrating, you might not need some things on the new instance (for example, certificates might need to be generated for a new name), so you'll have to be selective about restoring the configs.
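As a rough outline, the copy could look something like the commands below. This is a sketch, not meant to be run verbatim: it assumes a default /opt/splunk install on both hosts, the same Splunk version on each, and that you review every app and skip host-specific files (server.conf, certificates) before restoring.

```shell
# On the old search head: archive the app/user config and back up the KV store.
tar czf sh_config.tar.gz -C /opt/splunk/etc apps users system/local
/opt/splunk/bin/splunk backup kvstore -archiveName sh_kvstore_backup

# Copy both archives to the new search head, then unpack selectively --
# here only apps/ and users/, leaving system/local to be merged by hand:
tar xzf sh_config.tar.gz -C /opt/splunk/etc apps users
/opt/splunk/bin/splunk restore kvstore -archiveName sh_kvstore_backup
```

After restoring, restart Splunk and spot-check a few users' dashboards and saved searches before making the new instance primary.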
Hello @jmmontejo, I am unable to run the Dashboard in Splunk. Can you please paste the full XML?
Hello @10061987, You can use fields from the first line of the results in the alert, e.g. $result.email$ assuming your search includes the email field. Then if you trigger the alert for each result (rather than just once), each result will execute the action with its corresponding row from the events. Reference - https://community.splunk.com/t5/Alerting/Splunk-Alerts-How-to-use-email-address-from-variable/m-p/633020#:~:text=You%20can%20use%20fields%20from,corresponding%20row%20from%20the%20events.   Please accept the solution and hit Karma, if this helps!
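For reference, the per-result behaviour can also be wired up directly in savedsearches.conf. A minimal sketch, where the stanza name and search are purely illustrative (the digest-mode and email settings are the relevant parts):

```ini
[Hypothetical per-user alert]
search = index=app_logs level=ERROR | stats count by user email
# alert.digest_mode = 0 triggers the action once per result row,
# so $result.email$ resolves to that row's email value
alert.digest_mode = 0
action.email = 1
action.email.to = $result.email$
action.email.subject = Alert for $result.user$
```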
Hello @nina, There are a few ways:
- If you are planning to showcase some use cases as part of your project, Splunk Security Essentials (https://splunkbase.splunk.com/app/3435) does have some built-in datasets, for example for sample brute-force attack detection.
- https://github.com/splunk/botsv3 does have a number of sample datasets for multiple sourcetypes.
- You can use EventGen (https://splunkbase.splunk.com/app/1924) to generate "more" events based on existing event formats.

Please accept the solution and hit Karma, if this helps!
Hi @gcusello, thanks for replying. I am using a standalone search head and would like to move to another standalone search head, not a search head cluster. Is the process for migrating apps to a standalone search head the same?
Hello @grotti, If I understand the issue correctly, you are getting the expected results, but not for 12 hours. Is that right? If so, you can use the "| addinfo" command as below:

| inputlookup append=T incident_review_lookup
| addinfo
| where time>=info_min_time
| rename user as reviewer
| `get_realname(owner)`
| `get_realname(reviewer)`
| eval nullstatus=if(isnull(status),"true","false")
| `get_reviewstatuses`
| eval status=if((isnull(status) OR isnull(status_label)) AND nullstatus=="false",0,status)
| eval status_label=if(isnull(status_label) AND nullstatus=="false","Unassigned",status_label)
| eval status_description=if(isnull(status_description) AND nullstatus=="false","unknown",status_description)
| eval _time=time
| fields - nullstatus

It would give you results based on whatever time range you select from the time range picker. Please accept the solution and hit Karma, if this helps!
Hi @gcusello, I can explain the problem with some screenshots. The logs are related to an antivirus (policies, detected viruses, and so on). In the first screenshot you can see the file was created at 00:35:00 by an antivirus scan. In the file's content, however, the timestamp shows 06:35 (that's why I added the TZ option in props.conf). Finally, in the screenshot of the Splunk search, the _time column is aligned with the timestamp in the log content. The record was supposed to arrive at 00:35 but was indexed at 06:35 (6 hours after the scan). The host's clock is set to GMT-6. I tried to find an AV setting to set the time to GMT-6, but it does not have that option.
Hi, does the new export feature also allow exporting a table panel to PDF with the entire data set, in case the data overflows a single table page and pagination kicks in, hiding the remaining events on other pages? Thank you, Wojtek
Hello everyone, I'm working on a project, "Splunk Enterprise: an organization's go-to in detecting cyber threats". How/where can I get datasets and logs to use for my project?
Thank you for clarifying that
Hi @felipesodre , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @ucorral, could you share some sample of your logs? Ciao. Giuseppe
@gcusello I added INDEXED_EXTRACTIONS=csv, then I restarted the Splunk daemon:

[my_custom_sourcetype]
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
TIME_FORMAT=%Y-%m-%dT%H:%M:%S,
TIME_PREFIX=^
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TZ=America/Mexico_City
disabled=false

But I continue receiving logs timestamped 6 hours off. Copying the last log received in Splunk:

9/30/23 6:35:02.000 AM
2023-09-30T06:35:02,Time of completion: 00:35:02 ***** 0 sec (00:00:00)
host = ******* source = /var/log/****/*****log.****.txt sourcetype = my_custom_sourcetype

As you can see, the last log was received at 06:35:02 but was created at 00:35:02 in my current time in Mexico City. At the moment no more logs have shown up in Splunk. But now I realize the logs come split for some reason.
Hi I'm currently working on obtaining Windows Filtering Platform event logs to identify the user responsible for running an application. My goal is to enhance firewall rules by considering both the application and the specific user. To achieve this, I've set up a system to send all logs to Splunk, which is already operational. However, I've encountered an issue with WFP event logs not displaying the authorized principal user who executed the application. This absence of user information makes it challenging to determine who used what application before I can further refine the firewall rules. If you have any insights or suggestions on how to address this issue, I would greatly appreciate your assistance. I can readily access various details such as destination, source, port, application, and protocol, but the missing username is a crucial piece of information I need. Thank you for any guidance you can provide.
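WFP auditing events (EventCode 5156/5157) don't record the account that owns the connection, so the username usually has to be joined in from process-creation events (EventCode 4688, or Sysmon Event ID 1), which do carry it; if deploying Sysmon is an option, its Event ID 3 (network connection) logs the user directly. A rough SPL sketch of the correlation, where the index, sourcetype, and exact field names are assumptions based on the Splunk Add-on for Windows (note also that 4688 reports the new process ID in hex while 5156 uses decimal, and that PIDs get reused, so match within a narrow time window):

index=wineventlog EventCode=5156
| rename Process_ID as pid
| join type=left host pid
    [ search index=wineventlog EventCode=4688
      | eval pid=tonumber(replace(New_Process_ID, "^0x", ""), 16)
      | table host pid Account_Name New_Process_Name ]
| table _time host Application_Name pid Account_Name
        Source_Address Source_Port Destination_Address Destination_Port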
Happy that worked for you!! Happy Splunking
Sooner or later the default.xml needs to be deleted to allow new menu items from an updated app to appear.
HI @Utkc137, good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors