All Posts

<something> Status!=FIXED
We pull weekly vulnerability reports from Splunk associated with our Qualys data. I am trying to filter out all records associated with a hostname if the status field equals "Fixed". The data for a couple of hosts might look like this:

Date        Host   Status
2024-07-22  host1  NEW
2024-07-22  host2  NEW
2024-07-23  host1  ACTIVE
2024-07-23  host2  ACTIVE
2024-07-24  host1  ACTIVE
2024-07-24  host2  ACTIVE
2024-07-25  host1  FIXED
2024-07-25  host2  ACTIVE
2024-07-26  host2  ACTIVE
2024-07-27  host2  ACTIVE
2024-07-28  host2  ACTIVE
2024-07-29  host2  ACTIVE

Both host1 and host2 discover a new vulnerability on 7-22. On 7-23, the status for both flips to "ACTIVE". On 7-25, however, host1 is now showing a FIXED status. Host2 remains vulnerable through the remaining date range of the report. Since host1 fixed the vulnerability during the timeframe, how could I go about removing all host1 events based on the status field being equal to "Fixed" on the most recent data pull?
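A hedged sketch of one way to approach this: use eventstats to compute the most recent Status per Host and then drop every event belonging to a host whose latest value is FIXED. The index name is a placeholder and the field names are taken from the sample above; adjust both to your actual Qualys sourcetype.

index=<your_qualys_index>
| eventstats latest(Status) as latest_status by Host
| where upper(latest_status)!="FIXED"

If a host can report several different vulnerabilities in the same window, you would likely group by the vulnerability identifier as well (for example "by Host, QID") so that fixing one finding does not remove events for the others.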
I checked splunkd.log and found this error:

07-29-2024 11:16:16.785 -0500 WARN DiskMon [2047349 indexerPipe] - MinFreeSpace=5000. The diskspace remaining=0 is less than 1 x minFreeSpace on /opt/splunk/var/lib/splunk/_metrics/db
07-29-2024 11:16:16.969 -0500 ERROR Logger [2047618 HttpDedicatedIoThread-3] - Error writing to "/opt/splunk/var/log/splunk/splunkd_access.log": No space left on device

Please help me resolve this.
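Those messages indicate the filesystem holding /opt/splunk/var is completely full, so Splunk pauses indexing until free space rises above minFreeSpace. A minimal sketch of the usual first checks, assuming a default /opt/splunk layout:

df -h /opt/splunk
du -sh /opt/splunk/var/lib/splunk/* | sort -h
du -sh /opt/splunk/var/run/splunk/dispatch

If the largest consumers turn out to be index buckets, the longer-term fix is usually tightening retention for the biggest indexes in indexes.conf (maxTotalDataSizeMB or frozenTimePeriodInSecs) or expanding the volume, rather than only deleting files by hand.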
I checked python.log; it is not throwing any errors, just entries like this:

<{"name": "/en-US/static/@8FAC62B5BA1D0F7D9B1061086E2CF5B3713CA4C1EA33ED54B8F3EAA6F4E20FF2/fonts/splunkicons-regular-webfont.woff", "hqp": false, "redirect": 0, "dns": 0, "tcp": 0, "blocked": 9, "request": 23, "response": 1, "processing": null, "load": null, "ts": 13362}, {"name": "long-animation-frame", "hqp": false, "redirect": null, "dns": null, "tcp": null, "blocked": null, "request": null, "response": null, "processing": null, "load": null, "ts": null}, {"name": "long-animation-frame", "hqp": false, "redirect": null, "dns": null, "tcp": null, "blocked": null, "request": null, "response": null, "processing": null, "load": null, "ts": null}>

search.log has:

ERROR SearchMessages - orig_component="script" app="search" sid="1722375466.3" message_key="EXTERN:SCRIPT_NONZERO_RETURN" message=External search command 'test' returned error code 1.

The model I am using is an LLM. Is Splunk able to load large language models?
After later review, the rebuild of the host did not fix the problem. In fact, this was a more widespread problem than I had indicated; I'd say 3-5% of my hosts were seeing it. I found the release notes for the latest version of the Splunk_TA_nix add-on. Once I downloaded and installed this version of the add-on, new records started appearing. https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Releasenotes
We are in the midst of migrating our Splunk from Ubuntu to RHEL 9 (STIGed). Supposedly this should have been copy, paste, minor cleanup, start Splunk. It has been anything but that: error after error and problem after problem. I finally have it narrowed down to a certificate error, an error that doesn't really make sense since everything worked fine on the other server.

The server name and IP address are the same as the old server. They weren't set that way initially since I was using rsync to move things, but after a couple of errors I realized I needed to fix that. I set the IP and hostname to the same as the old one, powered off the old server, and rebooted the new one to make sure the settings were correct.

Currently, I'm seeing a bunch of python errors in splunkd.log:

ERROR ExecProcessor - message from "<splunkhome>/bin/python3.7 /<splunkhome>/etc/apps/search/bin/quarantine_files.py" from splunk.quarantine_files.configs import get_all_configs

This error repeats dozens of times, each time referencing a different file. Eventually it ends with:

ERROR ExecProcessor [33034 ExecProcessor] - message from "/<splunkhome>/bin/python3.7 /<splunkhome>/etc/apps/search/bin/quarantine_files.py" MemoryError
IndexProcessor [32848 MainThread] - handleSignal : Disabling streaming searches.
IndexProcessor [32848 MainThread] - request state change from=RUN to=SHUTDOWN_SIGNALED
ProcessRunner [32850 ProcessRunner] - Unexpected EOF from process runner child!
ProcessRunner [32850 ProcessRunner] - helper process seems to have died (child killed by signal 15: Terminated)!

In mongod.log I get "error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: unsupported certificate purpose. Ending connection from <server IP>". When running splunk start, it says "waiting for web server..." and eventually fails, saying the web interface does not seem to be available. I'm unclear whether this or the python errors cause Splunk to stop, but within about 10 minutes of starting, Splunk stops.

Previous to this, it was giving me an error about the kvstore (I don't remember it). After some digging, I discovered that I also had to add a section to server.conf for kvstore to use a certificate. That's apparently a requirement when running FIPS... except another RHEL system runs it fine without the certificate. The next error was about the hostname not matching the cert. The hostname it listed was 127.0.0.1, which is not what the hostname is set to. I manually set it to the server IP with SPLUNK_BINDIP in splunk-launch.conf and that cleared that issue, but it still doesn't load the web page and I now get the certificate error. It feels like it's connecting to itself as a client for some reason and failing certificate validation. Did I configure something wrong? We've never had to set that manually, so I found it odd that we had to when moving to RHEL.

I found another post that indicated I could check whether a certificate is valid for both server and client use with the -purpose option. Unfortunately (and unsurprisingly) it's server only. I have been trying to figure out either A) how to get it to stop asking for the client certificate, or B) how to create a certificate that acts as both server and client... or I guess C) how to create the client certificate and where to place it. All of our certs are probably server only. We have our own CA, so we aren't doing self-signed. Any thoughts? Any tips on any of these issues would be appreciated.
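On the certificate question specifically, a minimal sketch of how to confirm what a certificate is allowed to do; the file name is a placeholder for your actual PEM:

openssl x509 -in server_cert.pem -noout -purpose
openssl x509 -in server_cert.pem -noout -text | grep -A1 "Extended Key Usage"

If mongod (the KV store) rejects the certificate with "unsupported certificate purpose", the usual fix is to have the internal CA reissue it with both extended key usages, i.e. an extensions block containing:

extendedKeyUsage = serverAuth, clientAuth

so the same PEM can be presented as a server certificate and also validated when Splunk connects to its own KV store as a client.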
So in theory doable but practically ridiculous, gotcha
I just have to know, did a rollback solve the issue or does it persist? 
First, do not reach for inputlookup when lookup is readily applicable. Assuming that you have distinct names in indexed events and the lookup, all you need to do is

index=buildings_core
| lookup roomlookup_buildings.csv building_from_search2 as building_from_search1 output building_from_search2
| where isnull(building_from_search2)
| stats values(building_from_search1) as unmatched_buildings_from_search1 by request_unique_id

Here, I assume that you have already defined a lookup called roomlookup_buildings.csv. (I usually name my lookups without that .csv.) But then, why would you have different field names for the same thing? If in both indexed events and the lookup the field name is building, you can do

index=buildings_core
| lookup roomlookup_buildings.csv building output building as matching_building
| where isnull(matching_building)
| stats values(building) as unique_buildings by request_unique_id

Hope this helps.
I didn't have anything specific in mind. More like "that's where I'd look first and see if anything seems off". What's the lispy search for such a lookup-value-based search?
I am trying to ingest some json data into a new Splunk Cloud instance, with a custom sourcetype, but I keep getting duplicate data in the search results. This seems to be an extremely common problem, based on the number of old posts, but none of them seem to address the Cloud version.

I have a JSON file that looks like this:

{
  "RowNumber": 1,
  "ApplicationName": "177525278",
  "ClientProcessID": 114889,
  "DatabaseName": "1539703986",
  "StartTime": "2024-07-30 12:15:13"
}

I have a Windows 2022 server with a 9.2.2 universal forwarder installed. I manually added a very simple app to the C:\Program Files\SplunkUniversalForwarder\etc\apps folder.

inputs.conf contains this monitor stanza:

[batch://C:\splunk_test_files\ErrorMaster\*.json]
move_policy = sinkhole
index = centraladmin_errormaster
sourcetype = errormaster

props.conf contains this sourcetype (copied from _json):

[errormaster]
pulldown_type = true
INDEXED_EXTRACTIONS = json
KV_MODE = none
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/

On the cloud side I created (from the UI) a new sourcetype called 'errormaster' as a direct clone of the existing _json type. When I add a .json file to the folder, it is ingested and the events show up in the cloud instance, under the correct centraladmin_errormaster index and with sourcetype=errormaster. However, the fields all have duplicate values. If I switch it to the built-in _json type it works fine.

I have some field extractions I want to add, which is why I wanted a custom type. I'm guessing this is something obvious to the Cloud experts, but I am an accidental Splunk Admin with very little experience, so any help you can offer would be appreciated.
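One hedged guess at the cause, since this matches a common pattern: INDEXED_EXTRACTIONS = json on the forwarder already writes the JSON fields at index time, and if the sourcetype definition on the Cloud side still performs search-time JSON extraction (KV_MODE = json or automatic JSON key-value extraction), every field appears twice. A minimal props.conf sketch for how the errormaster sourcetype would look on the Splunk Cloud side under that assumption:

[errormaster]
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false

With search-time JSON extraction disabled, the indexed fields remain the single copy, and the custom field extractions you want can still be layered on top of this sourcetype.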
Hi @Mario.Morelli , Thanks for the clarification. Have a great day! Regards
Hi there, now I'm trying some of ESCU's built-in rules and sending them as notable alerts and via MS Teams webhooks. However, from the built-in query, only a few fields can be sent to the webhook alert, as shown in the capture below. Is it possible to enrich this information with details like those in the annotation section?
Check search.log and python.log for any messages that might explain why the script returned an error code.
Hi Splunk Community, I have a query that retrieves building data from two sources and I need assistance in identifying the unique buildings that are present in buildings_from_search1 but not in buildings_from_search2. Here's the query I'm currently using:

index=buildings_core
| stats values(building_from_search1) as buildings_from_search1 by request_unique_id
| append
    [| inputlookup roomlookup_buildings.csv
     | stats values(building_from_search2) as buildings_from_search2 ]

Could someone please guide me on how to modify this query to get the unique buildings that are only present in buildings_from_search1 and not in buildings_from_search2? Thank you in advance for your help!
Recently upgraded to 9.2.2 and Historic License Usage panels in the Monitoring Console are now broken. The panels in License Usage - Today still work. Most answers I found had to do with the license manager not indexing or reading the usage data. But on the license manager the panels all work in Settings » Licensing » License Usage Reporting.  The suggestion in this post did not apply either: https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Historical-license-usage-page-showing-blank/m-p/635412  
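In case it helps narrow this down, a hedged check you can run from the Monitoring Console search bar; it approximates the data those historic panels are built on and assumes the MC can search the license manager's _internal index:

index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d@d
| timechart span=1d sum(b) as bytes_used

If this returns nothing from the MC but returns data when run directly on the license manager, the MC is likely not seeing the license manager's _internal data (for example, the license manager is not a configured search peer or is not forwarding its internal logs), which would explain why only the historic panels are blank.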
Hi, you will only need one database agent, as it can connect to many databases, provided that agent can communicate with all the databases it needs to monitor. However, licensing will consume 6 licenses since you are monitoring 6 instances.
We got flagged for an OpenSSL 1.0.2f vulnerability on our SOAR instance, within the default installed Splunk Universal Forwarder path. It seems this vulnerability is remediated in later UF versions. I'm wondering which version of the UF ships with the latest SOAR 6.2.2 release?
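As a hedged side note while waiting for a definitive answer: you can read the version of the forwarder actually installed on the SOAR host directly. The path below is a placeholder for wherever the bundled UF lives on your instance:

<uf_install_dir>/bin/splunk version

Comparing that output against the Universal Forwarder release notes should confirm whether the bundled OpenSSL has been updated.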
When ingesting Microsoft Azure data, we see different time formats for different Azure categories, and I wonder how to parse them correctly. Both timezones seem to be UTC. Is the proper approach to set TZ=UTC and specify the two formats in datetime.xml?

{ category: NonInteractiveUserSignInLogs, time: 2024-07-30T18:02:42.0324621Z, ... }

{ category: RiskyUsers, time: 7/30/2024 1:48:56 PM, ... }
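A hedged sketch of one common approach: if the two categories land in (or can be routed to) separate sourcetypes, a per-sourcetype TIME_FORMAT in props.conf is simpler than maintaining a custom datetime.xml. The sourcetype names below are placeholders for however your Azure add-on tags these categories, and %7N for the 7-digit fraction may need adjusting (for example to %6N) depending on your Splunk version:

[azure:signinlogs]
TZ = UTC
TIME_PREFIX = "time"\s*:\s*"?
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7N

[azure:riskyusers]
TZ = UTC
TIME_PREFIX = "time"\s*:\s*"?
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p

If both formats truly must share a single sourcetype, then yes, TZ = UTC plus a custom datetime.xml defining both patterns (referenced via DATETIME_CONFIG) is the usual fallback.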
Thank you, @PickleRick. What would I look for specifically? I've reviewed the search.log extensively, and nothing seems to point to the problem. I should also mention that I created a support case with Splunk, but I wanted to post on Splunk Community because often the community solves my problem.