All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


After cases are pulled from ES into Phantom, a certain label is assigned to the event, and the event is later automatically promoted to a case. I have created a playbook that assigns labels to the promoted cases (based on the triggered Splunk rule), and it works 99% of the time, but sometimes I get two identical cases with different labels (the newly assigned one and the one configured in the Splunk app). Has anyone encountered this issue before?
We moved from Splunk Enterprise to Splunk Cloud a few years ago. To migrate all our objects, we packaged all apps with the CLI package command and uploaded them to Splunk Cloud. This command merges everything from the local folder into the default folder, as stated here: Package apps | Documentation | Splunk Developer Program. Unfortunately, the consequence is that these objects are no longer editable via the UI. A number of changes don't apply, even though the UI doesn't give me an error (e.g. re-assigning an orphaned search, or deleting an old object). To work around this, we asked Splunk Support for an export of the app (there is no way of doing this via the API, as far as I can find) so we could change the app. But if we change the app and repackage it, all local objects will again move to the default folder, making our problem even worse in the future. I have always used the "package" CLI command, which does this local-to-default merge. Does the Packaging Toolkit work the same way? I don't have experience with it. If it is able to keep objects in the local folder, it might save us. Any other ideas to overcome this situation are welcome as well. Thanks!
Hi, hopefully this is the right place to ask. I am pretty new to MS SQL as well as Splunk, so I am curious what the simplest way is to pipe MS SQL data (the Change Data Capture data/table in particular) into Splunk, and I'm wondering if anyone here has done or tried it. I currently have a Universal Forwarder set up on my Windows machine and am able to pipe Event Viewer data to Splunk. I looked into Splunk DB Connect, but the setup process seems a little too complicated for me (I installed Java, but I'm not sure how to go on from there). I am unsure whether I can achieve what I want through the Universal Forwarder (my MS SQL uses Windows Authentication, and from what I've read, Windows Authentication is not supported by the Universal Forwarder. Do correct me if I am wrong.). I appreciate any help.
Hi, is there a way to get the current time in Splunk and then convert it to epoch? I'm trying to create a dashboard that shows inactivity for my data sources, and I plan to use info from the | metadata command.
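For reference, a minimal SPL sketch of the idea (the index name and the one-hour threshold are illustrative): now() already returns the current time as epoch, and the lastTime field returned by | metadata is also epoch, so the two can be subtracted directly; strftime() converts epoch back to a readable string.

```
| metadata type=sourcetypes index=*
| eval now_epoch = now()
| eval age_seconds = now_epoch - lastTime
| eval lastTime_readable = strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| where age_seconds > 3600
```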
We have a datamodel with a two-level dataset hierarchy (Datamodel -> Parent Dataset -> Child Dataset). We have defined a field in the Child Dataset and are able to see that field's value in the preview.

Datamodel: Catalyst_App
Parent Dataset: Catalyst_Dataset
Child Dataset: Security_Advisories_Events
Field: Category

When we run the following tstats query:

| tstats summariesonly=false values(Catalyst_Dataset.Security_Advisories_Events.Category) from datamodel=Catalyst_App where nodename=Catalyst_Dataset.Security_Advisories_Events

we get no results. But when we run the following datamodel query:

| datamodel Catalyst_App Security_Advisories_Events search | fillnull value="-" | table Catalyst_Dataset.Security_Advisories_Events.Category

we do get Category values.
I can create a query that produces a timechart so I can see the load across the set of CPUs:

| timechart values(VALUE) span=15m by cpu limit=0

I can see a trend that one CPU has a higher load. I can also create a query using stats to get the avg/max/range of the load value:

| stats max(VALUE) as MaxV, mean(VALUE) as MeanV, range(VALUE) as Delta by _time

What I want to do is identify any CPU that's running a higher load than the average plus some sort of fudge factor.
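As a hedged sketch of one way to do this (field names are taken from the question; the 1.2 multiplier is an illustrative fudge factor), eventstats can attach the per-interval average across all CPUs to every row, so each CPU can be compared against it:

```
| bin _time span=15m
| stats avg(VALUE) as cpuLoad by _time, cpu
| eventstats avg(cpuLoad) as overallAvg by _time
| where cpuLoad > overallAvg * 1.2
```

Unlike stats, eventstats keeps the individual rows, which is what allows the per-CPU comparison in the final where clause.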
Looking for ways to refresh the client list that phones home to the deployment server without restarting the Splunk service or taking access of the server. We have a few sources onboarded that recycle their instances every 24 hours; within a few days the count of clients becomes four times our usual number, and unless something is done the DS becomes slower. The only way to reset this list seems to be a Splunk restart, which we want to avoid. Has anyone faced something similar?
I'm running Splunk Enterprise 9.1.1. It is a relatively fresh installation (done this year). The Splunk forwarders are also on version 9.1.1 of the agent. The indexer is also the deployment server; beyond that, I only have forwarders forwarding to it. I have one Linux host (RedHat 8.9) with this problem. I've deployed Splunk_TA_nix and enabled rlog.sh to show info from /var/log/audit/audit.log.

Using today as an example (06/05/2024), I don't see entries for 06/05/2024, but I do see logs from today under 05/06/2024. Example from the Splunk search page:

index="linux_hosts" host=bad_host          (last 30 days)
05/06/2024 at the left side of events
audit data...........(06/05/2024 14:32:12) audit data.........

As I mentioned above, I have one deployment server, and all forwarders use the same centralized one. It's a small environment, I'd say ~25 Linux hosts (RedHat 7 and 8). This is the only RedHat 8 with this problem. I tried reinstalling the Splunk forwarder (I completely deleted /path/to/splunkforwarder once I had uninstalled it).

I know a little about using props.conf with TIME_FORMAT but have not done so. My logic is that if I needed it, I'd see this on all forwarders, not just the one with the problem. I ran localectl and it shows en_US. ausearch -i (the same thing rlog.sh does) shows the dates/times as I'd expect. Anything else I should look for from the OS perspective? Any suggestions on what I could do from Splunk? Also, I noticed that when I go to the _internal index, dates/times are consistent. When I use my normal index (linux_hosts), this one RH8 host has the problem; the other RedHat 8 hosts are as I'd expect.

A side note: someone else suspected this host wasn't logging, so they did a manual import of the audit.log files. Mind you, the dates in those files were not parsed, since they didn't go through rlog.sh (ausearch -i) first. Could this also be part of the problem? If so, how can I undo what was done? Thanks!
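For reference, a hedged props.conf sketch of pinning the day/month order at parse time (the sourcetype name, TIME_PREFIX regex, and lookahead are assumptions; adjust them to the actual layout of the events):

```
# props.conf on the parsing tier (sourcetype name is hypothetical)
[rlog_audit]
# anchor just before the timestamp, e.g. the opening parenthesis
TIME_PREFIX = \(
# force month/day/year so 06/05/2024 cannot be read as 05/06/2024
TIME_FORMAT = %m/%d/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
```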
Hi, we have 2 HFs, active and passive, and I shut off the Splunk service on one HF. I want to be alerted only when both of my HFs are not sending logs / the Splunk service is down. I don't want any alerts as long as at least one of the HFs is running.
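A hedged sketch of one alerting approach (the host names and the 10-minute threshold are illustrative): every running Splunk instance writes to the _internal index, so you can check when each HF was last seen and fire only when both have gone quiet:

```
| tstats latest(_time) as lastSeen where index=_internal AND (host=hf1 OR host=hf2) by host
| eval minutesSince = round((now() - lastSeen) / 60)
| where minutesSince > 10
| stats count
| where count >= 2
```

Note that a host which has sent nothing in the search window produces no row at all, so in practice you may want to join against a lookup of expected hosts rather than rely on counting rows.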
I have a field payload containing the following JSON:

{
  "cacheStats": {
    "lds:UiApi.getRecord": {
      "hits": 0,
      "misses": 1
    }
  }
}

I can normally use spath to retrieve the hits and misses values:

cacheRecordHit=spath(payload,"cacheStats.someCacheProperty.hits")

But it seems the period, and possibly the colon, of the lds:UiApi.getRecord property are preventing it from navigating the JSON, such that:

| eval cacheRecordHit=spath(payload,"cacheStats.lds:UiApi.getRecord.hits")

returns no data. I have tried the solution in this answer:

| spath path=payload output=convertedPayload
| eval convertedPayload=replace(convertedPayload,"lds:UiApi.getRecord","lds_UiApi_getRecord")
| eval cacheRecordHit=spath(convertedPayload,"cacheStats.lds:UiApi.getRecord.hits")
| stats count,sum(hits)

but hits still returns as null. I appreciate any insights.
I'm considering loading readable/textual files, in different formats, into Splunk to get the benefits of indexing and fast searching. The files are static and don't change like regular logs. Is this use case supported by Splunk?
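For reference, a hedged sketch of a one-time ingest (the paths, index, and sourcetype are illustrative): static files can be indexed once via the CLI oneshot command, or via a batch input in inputs.conf, which ingests and then deletes the files:

```
# CLI: index a single file once
# $SPLUNK_HOME/bin/splunk add oneshot /data/docs/report.txt -index static_docs -sourcetype plain_text

# inputs.conf: batch input that ingests files from a directory, then removes them
[batch:///data/docs]
move_policy = sinkhole
index = static_docs
sourcetype = plain_text
disabled = false
```

Note that a batch input with move_policy = sinkhole deletes the source files after indexing, so point it at copies if you need to keep the originals.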
Hello all, the question is: does IOWAIT mean anything? I am in the process of upgrading Splunk 8.2.12 to 9.1.2, and then 9.2.1. I have not yet upgraded to 9.1.2. The Health Report is set at default settings (i.e. 3, etc.). I have tried the suggestion of doubling the threshold values, but I eventually get a yellow warning, or sometimes red, etc. I am running Splunk Enterprise 8.2.12 on Oracle Linux (version 7.9) with 12 CPUs and 64 GB of memory. Do these settings have any benefit for the IOWAIT thresholds? I see where I can disable IOWAIT. Or does it make sense to try to generate some sort of diag, which has a link when opening the "Health Report Manager"? Any info here? Am I missing something? Thanks, as always, to a very helpful Splunk community. EWHOLZ
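For reference, a hedged health.conf sketch of raising or silencing the iowait indicators (the indicator names and values here are assumptions based on the defaults shipped in $SPLUNK_HOME/etc/system/default/health.conf; verify the exact names against your version before using them):

```
# health.conf, e.g. in $SPLUNK_HOME/etc/system/local/
[feature:iowait]
# raise the yellow/red thresholds for the average-CPU iowait indicator
indicator:avg_cpu__max_perc_last_3mins:yellow = 20
indicator:avg_cpu__max_perc_last_3mins:red = 40
# ...or silence this feature in the Health Report entirely
# disabled = 1
```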
We see cases where warm buckets are not being moved to cold storage for six weeks, and we wonder how to set it up correctly so they move within two or three weeks.
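For reference, warm buckets roll to cold based on bucket count or home-path size, not directly on age, so "within two or three weeks" has to be approximated by sizing those limits against your daily volume. A hedged indexes.conf sketch (index name and values are illustrative):

```
# indexes.conf (values are illustrative; tune per index volume)
[my_index]
# the oldest warm bucket rolls to cold once this count is exceeded
maxWarmDBCount = 100
# ...or once the home path (hot + warm buckets) exceeds this size in MB
homePath.maxDataSizeMB = 500000
```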
Hi all, has anyone explored https://github.com/splunk/splunk-conf-imt? We have Splunk Cloud, and I'm wondering how I can proceed with testing this, as the steps are not quite clear to me. I appreciate the help.
When upgrading to 9.2.1, I am getting:

Waiting for web server at https://xxxx:443 to be available..................WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

Splunk is starting, but the web server is not starting and the front end is not accessible.
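For reference, the warning itself refers to this server.conf setting (a hedged sketch; enabling it requires certificates whose name matches the server, so treat it as an assumption to verify in your environment):

```
# server.conf
[sslConfig]
cliVerifyServerName = true
```

Note that the hostname-validation warning is informational; the web server failing to start is a separate problem, typically diagnosed from $SPLUNK_HOME/var/log/splunk/web_service.log and splunkd.log.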
Hello, I have recently started working with Splunk Enterprise and I would like to use it as a SIEM for my network. I have successfully ingested data into Splunk from my server and created an alert that fires if certain conditions are met. In order to send an email when an alert is triggered, I created an SMTP connector using the Exchange Admin Center. I then configured the mail server in Splunk, but when an alert fires in Splunk, I do not receive any emails. I am wondering if the issue is with the connector I created or if it could be something else. What is the procedure to create an SMTP connector and ensure that email can be sent from Splunk? Thank you for reading.
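As a hedged way to isolate whether the SMTP connector or the alert configuration is at fault (the addresses and server are illustrative), the sendemail search command can send a test message directly from the search bar, bypassing the alert action:

```
| makeresults
| eval message="SMTP test from Splunk"
| sendemail to="you@example.com" subject="Splunk SMTP test" server="smtp.example.com:25" message="SMTP test from Splunk"
```

If this fails, the error in the search job (and in python.log on the search head) usually points at the mail-server side; if it succeeds, the alert's email action settings are the more likely culprit.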
How do I map MITRE ATT&CK content in Splunk Security Essentials? I want to map MITRE ATT&CK techniques for all of the alerts I have created inside Splunk Enterprise.
I want to integrate OpenCTI with Splunk ES to stay on top of threats.
Good morning, I recently created a tag for a set of hosts; for example, CA for all California hosts. Does this take time to populate or show up within my data models? I am running a search similar to this:

| tstats count FROM datamodel=<data_model>.<root_event> WHERE tag=CA BY _time, host, etc....

I have also tried this:

| datamodel <data_model> <root_event> search
| search tag=CA
| table _time, host, etc....