I have a search that captures a specific product code, calculates the total number of units attributed to the product code that were sold over the timeframe assigned to the search (say, seven days), and presents the data as a timechart. The search also references a lookup table to provide a "human friendly" name that matches the product code, along with a baseline value which reflects a specific number of units expected to be sold each day. The search looks like this...

sourcetype=foo index=bar item="productA"
| lookup product_table.csv productCode AS item
| timechart span=1d sum(product_count) as volume by item
| eval baseline = max(dailyBaseline)

...and the lookup table looks like this:

productCode  name         dailyBaseline
productA     Widget       5000
productB     Thingamajig  10000

I would like the ensuing chart to show the baseline I've defined in the lookup table. If I replace "max(dailyBaseline)" in the eval statement with a static value such as 5000, it works fine. I can't figure out how to get the "dailyBaseline" value from the lookup table to work, though.
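One possible reason the eval returns nothing is that timechart only keeps the fields it aggregates (plus _time and the split-by field), so dailyBaseline no longer exists by the time the eval runs. A minimal sketch of one way to carry it through, assuming a single product is being charted and the baseline should appear as its own flat series; field names are taken from the post above and this is untested:

sourcetype=foo index=bar item="productA"
| lookup product_table.csv productCode AS item OUTPUT name dailyBaseline
| timechart span=1d sum(product_count) as volume, max(dailyBaseline) as baseline

Because the baseline is aggregated inside timechart, it survives as a separate column that a chart overlay can render as a constant line.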
Hello and thanks for bringing your question here. There are some ways to control user sessions. This page has some example code: https://docs.appdynamics.com/appd/24.x/24.7/en/end-user-monitoring/mobile-real-user-monitoring/instrument-ios-applications/customize-the-ios-instrumentation (see the "Example of a Programmatically Controlled Session" section). Basically, sessions by default start when your app starts and close when the user closes the app. During this period, we capture user activity events like clicks and touches; everything is recorded and attached to your session. The idea is to track an entire user interaction block. For example, I open my bank app (session starts), go to balance, check the value, and close the app. This is a very simple session and it contains maybe a single click. If I do this again tomorrow, a new session is created. I like to think of "sessions per agent" as "sessions per device". Hope it helps!
I am working on ingesting the WSJT-X log. I got to where I have the basic fields in Splunk and wanted to create a date and time stamp from the poorly formatted data. I started with a very basic eval statement to test this and I am not seeing the new field. So, what did I miss? I created the following:

transforms.conf

[wsjtx_log]
REGEX = (\d{2})(\d{2})(\d{2})_(\d{2})(\d{2})(\d{2})\s+(\d+\.\d+)\s+(\w+)\s+(\w+)\s+(\d+|-\d+)\s+(-\d+\.\d+|\d+\.\d+)\s+(\d+)\s+(.+)
FORMAT = year::$1 month::$2 day::$3 hour::$4 min::$5 sec::$6 freqMhz::$7 action::$8 mode::$9 rxDB::$10 timeOffset::$11 freqOffSet::$12 remainder::$13

[add20]
INGEST_EVAL = fyear="20" . $year$

props.conf

[wsjtx_log]
REPORT-wsjtx_all = wsjtx_log
TRANSFORMS = add20

fields.conf

[fyear]
INDEXED = TRUE
I'm using the Splunk search API to convert a search job to CSV and download it. Unfortunately, in this case Splunk does not enclose values in double quotes.

/api/search/jobs/$alert_data$/results?isDownload=true&timeFormat=%FT%T.%Q%3Az&maxLines=0&count=0&filename=alert_data&outputMode=csv
This is a little late, but I was interested in doing the same thing as the OP. Looking at the ES dashboard source, it looks like the key indicators visualization is custom. That makes me question using it in my own dashboard. Is that a good idea?
My apologies for not reading the question carefully. eventstats is your friend.

| eventstats values(Status) by Host
| where NOT "FIXED" IN ('values(Status)')
| fields - "values(Status)"

Here, I am breaking out of my usual pattern to use a semantic filter. For economy, you can also use the side effect of Splunk's equality on multivalue:

| eventstats values(Status) by Host
| where 'values(Status)' != "FIXED"
| fields - "values(Status)"

Either way, you get

Date        Host   Status
2024-07-22  host2  NEW
2024-07-23  host2  ACTIVE
2024-07-24  host2  ACTIVE
2024-07-25  host2  ACTIVE
2024-07-26  host2  ACTIVE
2024-07-27  host2  ACTIVE
2024-07-28  host2  ACTIVE
2024-07-29  host2  ACTIVE

Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="Date, Host, Status
2024-07-22, host1, NEW
2024-07-22, host2, NEW
2024-07-23, host1, ACTIVE
2024-07-23, host2, ACTIVE
2024-07-24, host1, ACTIVE
2024-07-24, host2, ACTIVE
2024-07-25, host1, FIXED
2024-07-25, host2, ACTIVE
2024-07-26, host2, ACTIVE
2024-07-27, host2, ACTIVE
2024-07-28, host2, ACTIVE
2024-07-29, host2, ACTIVE"
``` data emulation above ```
This isn't what I'm looking for. This eliminates just the event with a fixed status. I need to remove all of the host1 events because one of them has a status of fixed.
I figured out that is not causing the issue; my disk space is now 2 GB and MinFreeSpace is 500 MB. Also, I am loading Hugging Face models in the splunk/apps/myapp/bin/user-queries.py file. When I hardcoded the output without loading the model, it worked. I am also able to see the response from my model for the query on the terminal.

The recent errors in splunkd.log are:

07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimGroup
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimGroup
07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimGroups
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimGroups
07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimUser
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimUser
07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimUsers
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimUsers
07-30-2024 18:23:29.998 -0500 ERROR SidecarThread [3255548 MainThread] - <stderr> Sidecar: reading standard input
07-30-2024 18:23:29.998 -0500 ERROR SidecarThread [3255548 MainThread] - <stderr> Sidecar: 2024/07/30 18:23:30 [begin] SIGUSR1 handler
07-30-2024 18:23:30.002 -0500 ERROR SidecarThread [3255548 MainThread] - <stderr> Sidecar: 2024/07/30 18:23:30 Supervisor logs printed at : /opt/splunk/var/log/splunk
07-30-2024 18:23:30.098 -0500 ERROR NoahHeartbeat [3255712 SplunkdSpecificInitThread] - event=deleteReceipts Not supported, since no remote queue is configured in inputs.conf
07-30-2024 18:23:30.098 -0500 ERROR NoahHeartbeat [3255712 SplunkdSpecificInitThread] - event=deleteReceipts message="Unable to load smartbus conf"
07-30-2024 18:23:30.110 -0500 ERROR loader [3255846 datalakeinput] - Couldn't find library for: datalakeinputprocessor
07-30-2024 18:23:30.110 -0500 ERROR pipeline [3255846 datalakeinput] - Couldn't find library for: datalakeinputprocessor
07-30-2024 18:23:30.110 -0500 ERROR PipelineComponent [3255548 MainThread] - The pipeline datalakeinput threw an exception during initialize
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimGroup
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimGroup
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimGroups
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimGroups
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimUser
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimUser
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimUsers
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimUsers
The disk is full. The resolution is to free up space or add more storage. Indexes should not be allowed to grow large enough to fill the disk. Indexes should be part of a volume that is smaller than the total disk space available (to allow for metadata). That MinFreeSpace is 5 GB while remaining disk space is 0 tells me that Splunk is sharing storage with other applications and/or the OS. That is a bad practice. The operating system, $SPLUNK_HOME, and $SPLUNK_DB should be on separate and independent mount points.
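A minimal indexes.conf sketch of the volume approach described above; the path and size are placeholders that should be adjusted to the actual mount point and capacity:

[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 400000

[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

Capping the volume below the size of the mount point leaves headroom for metadata and keeps the indexes from filling the disk outright.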
If you find the issue happens after the Windows server is restarted, and restarting the Splunk universal forwarder fixes it, then try one of the following workarounds.

Use 'Delayed Start' for the Splunk Forwarder service (https://community.splunk.com/t5/Getting-Data-In/Why-quot-FormatMessage-error-quot-appears-in-indexed...). However, it's hard to configure thousands of DCs.

Or configure interval as a cron schedule instead:

interval = [<decimal>|<cron schedule>]

[WinEventLog]
interval = * * * * *

By default the wineventlog interval is 60 seconds. That means as soon as Splunk is restarted, wineventlog (or any modinput) is immediately started. Subsequently, every 60 seconds (the configured interval) Splunk checks whether the modinput is still running, and re-launches it if not. If we use a cron schedule that runs every minute instead of a 60-second interval, Splunk will not launch the modinput immediately. So essentially the idea is to convert the interval setting from a decimal to a cron schedule to introduce a delay.

If the above does not solve the issue, meaning the issue is not related to a Windows server restart, try one of the following workarounds.

Workaround 1, in inputs.conf:
1. Stop the UF.
2. Set batch_size to 1.
3. Start the UF.

[<impacted channel>]
batch_size = 1

Workaround 2, in inputs.conf:
1. Stop the UF.
2. Back up the checkpoint file $SPLUNK_HOME\var\lib\splunk\modinputs\WinEventLog\<impacted channel>. Save the current record id.
3. In inputs.conf, for the impacted channel, add use_old_eventlog_api=true.
4. Start the UF.
5. Stop the UF after 30 seconds.
6. Replace the current record with the record id found in step 2, so that ingestion starts from the right place.

When setting use_old_eventlog_api=false (in case it does help), follow all of the above steps so that the UF correctly starts from the desired record id.

[<impacted channel>]
use_old_eventlog_api = true
You probably could define TIME_PREFIX to find that timestamp.  However, is the timestamp present for every event or just once in the file?  If the latter, then start writing code to re-process that file into something Splunk can ingest more easily.
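If the timestamp does appear on every event, a props.conf sketch along these lines might work, assuming the YYMMDD_HHMMSS prefix implied by the regex in the earlier post and treating the sourcetype name as a placeholder:

[wsjtx_log]
TIME_PREFIX = ^
TIME_FORMAT = %y%m%d_%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 13

Here TIME_FORMAT supplies the century that the raw two-digit year lacks, which may be simpler than building the year back up with INGEST_EVAL.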
I fine-tuned an LLM and I want to integrate it with Splunk. In a Splunk dashboard, I am going to include a question/answering mechanism where my model answers the user's question. For that, I created an app "customApp" in src/etc/apps, added a user-queries.py file in the apps/customApp/bin folder and a commands.conf file in the apps/myapp/default folder. I load the fine-tuned model and generate the response in user-queries.py.

I am getting the following error on the Splunk dashboard when calling this command. By the way, it worked when I hardcoded the output instead of loading the model and asking it to generate responses.

Error in 'userquery' command: External search command exited unexpectedly with non-zero error code 1.

Am I following the correct approach to integrate an LLM with Splunk? I checked my splunkd.log; it shows a disk space issue. Can someone please save me from this? Thanks in advance!
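For reference, a minimal commands.conf sketch for wiring a script like this up as a custom search command; the stanza and script names come from the post above, while the protocol settings are assumptions that depend on how the script is written (for example, whether it uses the Splunk Python SDK's chunked protocol):

[userquery]
filename = user-queries.py
chunked = true
python.version = python3

A non-zero exit code usually means the script itself crashed, so whatever traceback the script prints (or writes to search.log) is the next thing to check.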
<something> Status!=FIXED
We pull weekly vulnerability reports from Splunk associated with our Qualys data. I am trying to filter out all records associated with a hostname if the status field equals "Fixed". The data for a couple of hosts might look like this:

Date        Host   Status
2024-07-22  host1  NEW
2024-07-22  host2  NEW
2024-07-23  host1  ACTIVE
2024-07-23  host2  ACTIVE
2024-07-24  host1  ACTIVE
2024-07-24  host2  ACTIVE
2024-07-25  host1  FIXED
2024-07-25  host2  ACTIVE
2024-07-26  host2  ACTIVE
2024-07-27  host2  ACTIVE
2024-07-28  host2  ACTIVE
2024-07-29  host2  ACTIVE

Both host1 and host2 discover a new vulnerability on 7-22. On 7-23, the status for both flips to "ACTIVE". On 7-25, however, host1 is now showing a FIXED status. Host2 remains vulnerable through the remaining date range of the report. Since host1 fixed the vulnerability during the timeframe, how could I go about removing all host1 events based on the status field being equal to "Fixed" on the most recent data pull?
I checked splunkd.log and found the error "The diskspace remaining=0 is less than 1 x minFreeSpace on /opt/splunk/var/lib/splunk/_metrics/db":

07-29-2024 11:16:16.785 -0500 WARN DiskMon [2047349 indexerPipe] - MinFreeSpace=5000. The diskspace remaining=0 is less than 1 x minFreeSpace on /opt/splunk/var/lib/splunk/_metrics/db
07-29-2024 11:16:16.969 -0500 ERROR Logger [2047618 HttpDedicatedIoThread-3] - Error writing to "/opt/splunk/var/log/splunk/splunkd_access.log": No space left on device

Please help me resolve this.
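One way to confirm how much space remains is from the search bar; a sketch using the REST endpoint for partition status (field names may differ slightly between versions):

| rest splunk_server=local /services/server/status/partitions-space
| table mount_point fs_type capacity free

Given the "No space left on device" error above, though, the immediate step is freeing space on that mount point so Splunk can write its own logs again.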
I checked; python.log is not throwing any errors and contains entries like this:

{"name": "/en-US/static/@8FAC62B5BA1D0F7D9B1061086E2CF5B3713CA4C1EA33ED54B8F3EAA6F4E20FF2/fonts/splunkicons-regular-webfont.woff", "hqp": false, "redirect": 0, "dns": 0, "tcp": 0, "blocked": 9, "request": 23, "response": 1, "processing": null, "load": null, "ts": 13362}, {"name": "long-animation-frame", "hqp": false, "redirect": null, "dns": null, "tcp": null, "blocked": null, "request": null, "response": null, "processing": null, "load": null, "ts": null}, {"name": "long-animation-frame", "hqp": false, "redirect": null, "dns": null, "tcp": null, "blocked": null, "request": null, "response": null, "processing": null, "load": null, "ts": null}

search.log has:

ERROR SearchMessages - orig_component="script" app="search" sid="1722375466.3" message_key="EXTERN:SCRIPT_NONZERO_RETURN" message=External search command 'test' returned error code 1.

The model I am using is an LLM. Can Splunk load large language models?
After later review, the rebuild of the host did not fix the problem. In fact, this was a more widespread problem than I had indicated; I'd say 3-5% of my hosts were seeing it. I found the release notes for the latest version of the Splunk_TA_nix add-on. Once I downloaded and installed this version of the add-on, new records started appearing. https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Releasenotes
We are in the midst of migrating our Splunk deployment from Ubuntu to RHEL 9 STIGged. Supposedly, this should have been copy, paste, minor cleanup, start Splunk. It has been anything but that: error after error and problem after problem. I finally have it narrowed down to a certificate error; an error that doesn't really make sense, since everything worked fine on the other server.

The server name and IP address are the same as the old server. They weren't set that way initially since I was using rsync to move things, but after a couple of errors, I realized I needed to fix that. I set the IP and hostname to the same as the old one, powered off the old server, and rebooted the new one to make sure the settings were correct.

Currently, I'm seeing a bunch of Python errors in splunkd.log:

ERROR ExecProcessor - message from "<splunkhome>/bin/python3.7 /<splunkhome>/etc/apps/search/bin/quarantine_files.py" from splunk.quarantine_files.configs import get_all_configs

This error repeats dozens of times, but the file it references changes. Eventually it ends with:

ERROR ExecProcessor [33034 ExecProcessor] - message from "/<splunkhome>/bin/python3.7 /<splunkhome>/etc/apps/search/bin/quarantine_files.py" MemoryError
IndexProcessor [32848 MainThread] - handleSignal : Disabling streaming searches.
IndexProcessor [32848 MainThread] - request state change from=RUN to=SHUTDOWN_SIGNALED
ProcessRunner [32850 ProcessRunner] - Unexpected EOF from process runner child!
ProcessRunner [32850 ProcessRunner] - helper process seems to have died (child killed by signal 15: Terminated)

In mongod.log I get: "error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: unsupported certificate purpose. Ending connection from <server IP>".

When running splunk start, it says "waiting for web server..." and eventually fails, saying the web interface does not seem to be available. I'm unclear whether this or the Python errors causes Splunk to stop, but within about 10 minutes of starting Splunk, it stops.

Previous to this, it was giving me an error about the KV store (I don't remember it). After some digging, I discovered that I also had to add a section to server.conf for the KV store to use a certificate. That's apparently a requirement when running FIPS... except another RHEL system runs it fine without the certificate. The next error was about the hostname not matching the cert. The hostname it listed was 127.0.0.1, which is not what the hostname is set to. I manually set it to the server IP with SPLUNK_BINDIP in splunk-launch.conf and that cleared that issue, but it still doesn't load the web page and I now get the certificate error.

It feels like it's connecting to itself as a client for some reason and failing the certificate check. Did I configure something wrong? We've never had to set that manually, so I found it odd that we had to when moving to RHEL. I found another post that indicated I could check whether the certificate is valid for both server and client use with the -purpose option. Unfortunately (and unsurprisingly), it's server only. I have been trying to figure out either A) how to get it to stop asking for the client certificate, B) how to create a certificate that acts as both server and client, or I guess C) how to create the client certificate and where to place it. All of our certs are probably server only. We have our own CA, so we aren't doing self-signed. Any thoughts? Any tips on any of these issues would be appreciated.
So in theory doable but practically ridiculous, gotcha
I just have to know, did a rollback solve the issue or does it persist?