All Posts


Do you mean something like this?

sourcetype=foo index=bar item="productA"
| lookup product_table.csv productCode AS item
| timechart span=1d sum(product_count) as volume by name
| lookup product_table.csv name output dailyBaseline
Hello,

I am practicing the PoC/PoV lab exercise in the Black Belt Stage 2 course. When installing the controller, it showed that the disk storage needed is 5120 MB, which is for the Medium profile (5TB), instead of the chosen Demo profile (50GB). Can anyone give advice on my issue? Thanks.

Jonathan Wang, 2024/07/31

Error messages:
Task failed: Check if the required data directories have sufficient space on host: appd-server as user: root with message: The destination directory has insufficient disk space. You need a minimum of 51200 MB for installation.
Here's one way you can do it, by adding this after your timechart command:

| foreach *
    [ eval productCode="<<FIELD>>"
    | lookup baseline.csv productCode OUTPUT name dailyBaseline
    | eval {name}=<<FIELD>>, base_{name}=dailyBaseline
    | fields - productCode name dailyBaseline "<<FIELD>>" ]

After a split-by clause the column names are the product codes, so foreach * will iterate through the columns and, for each one found, look up the friendly name and create a base_<name> field holding the daily baseline. There will be other ways.

Edit: Added friendly name mapping
Given that your search is restricting your items to just one product (code), you could do the lookup after the timechart:

sourcetype=foo index=bar item="productA"
| timechart span=1d sum(product_count) as volume by item
| eval item="productA"
| lookup product_table.csv productCode AS item output dailyBaseline
| fields - item
I have a search that captures a specific product code, calculates the total number of units attributed to the product code that were sold over the timeframe assigned to the search (say, seven days), and presents the data as a timechart. The search also references a lookup table to provide a "human friendly" name that matches the product code, along with a baseline value which reflects a specific number of units expected to be sold each day. The search looks like this...

sourcetype=foo index=bar item="productA"
| lookup product_table.csv productCode AS item
| timechart span=1d sum(product_count) as volume by item
| eval baseline = max(dailyBaseline)

...and the lookup table looks like this:

productCode  name         dailyBaseline
productA     Widget       5000
productB     Thingamajig  10000

I would like the ensuing chart to show the baseline I've defined in the lookup table. If I replace "max(dailyBaseline)" in the eval statement with a static value such as 5000, it works fine. I can't figure out how to get the "dailyBaseline" value from the lookup table to work, though.
Hello and thanks for bringing your question here. There are some ways to control the user session. This page has some example code (see the "Example of a Programmatically Controlled Session" section): https://docs.appdynamics.com/appd/24.x/24.7/en/end-user-monitoring/mobile-real-user-monitoring/instrument-ios-applications/customize-the-ios-instrumentation

Basically, sessions by default start when your app starts and end when the user closes the app. During this period we capture user activity events like clicks, touches, etc.; everything is recorded and attached to your session. The idea is to track an entire block of user interaction. For example, I open my bank app (session starts), go to my balance, check the value and close the app. This is a very simple session and it contains maybe a single click. If I do this again tomorrow, you will get a new session. I like to think of "Sessions per Agent" as "Sessions per device".

Hope it helps!
I am working on ingesting the WSJT-X log. I got to where I have the basic fields in Splunk and wanted to create a date and time stamp from the poorly formatted data. I started with a very basic eval statement to test this and I am not seeing the new field. So, what did I miss?

I created the following:

transforms.conf
[wsjtx_log]
REGEX = (\d{2})(\d{2})(\d{2})_(\d{2})(\d{2})(\d{2})\s+(\d+\.\d+)\s+(\w+)\s+(\w+)\s+(\d+|-\d+)\s+(-\d+\.\d+|\d+\.\d+)\s+(\d+)\s+(.+)
FORMAT = year::$1 month::$2 day::$3 hour::$4 min::$5 sec::$6 freqMhz::$7 action::$8 mode::$9 rxDB::$10 timeOffset::$11 freqOffSet::$12 remainder::$13

[add20]
INGEST_EVAL = fyear="20" . $year$

props.conf
[wsjtx_log]
REPORT-wsjtx_all = wsjtx_log
TRANSFORMS = add20

fields.conf
[fyear]
INDEXED = TRUE
I'm using the Splunk search API to convert a search job to CSV and download it. Unfortunately, in this case Splunk does not enclose values in double quotes.

/api/search/jobs/$alert_data$/results?isDownload=true&timeFormat=%FT%T.%Q%3Az&maxLines=0&count=0&filename=alert_data&outputMode=csv
This is a little late, but I was interested in doing the same thing as OP. Looking at the ES dashboard source, it looks like the key indicators visualization is custom. That makes me question using it in my own dashboard. Is that a good idea?
My apologies for not reading the question carefully.  eventstats is your friend.

| eventstats values(Status) by Host
| where NOT "FIXED" IN ('values(Status)')
| fields - "values(Status)"

Here, I am breaking out of my usual pattern to use a semantic filter.  For economy, you can also use the side effect of Splunk's equality on multivalue:

| eventstats values(Status) by Host
| where 'values(Status)' != "FIXED"
| fields - "values(Status)"

Either way, you get

Date        Host   Status
2024-07-22  host2  NEW
2024-07-23  host2  ACTIVE
2024-07-24  host2  ACTIVE
2024-07-25  host2  ACTIVE
2024-07-26  host2  ACTIVE
2024-07-27  host2  ACTIVE
2024-07-28  host2  ACTIVE
2024-07-29  host2  ACTIVE

Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="Date, Host, Status
2024-07-22, host1, NEW
2024-07-22, host2, NEW
2024-07-23, host1, ACTIVE
2024-07-23, host2, ACTIVE
2024-07-24, host1, ACTIVE
2024-07-24, host2, ACTIVE
2024-07-25, host1, FIXED
2024-07-25, host2, ACTIVE
2024-07-26, host2, ACTIVE
2024-07-27, host2, ACTIVE
2024-07-28, host2, ACTIVE
2024-07-29, host2, ACTIVE"
``` data emulation above ```
This isn't what I'm looking for. This eliminates just the event with a fixed status. I need to remove all of the host1 events because one of them has a status of fixed.
I figured out that is not causing the issue; my disk space is now 2GB and MinFreeSpace is 500MB.

Also, I am loading Hugging Face models in the splunk/apps/myapp/bin/user-queries.py file. When I hardcoded the output without loading the model, it worked. Also, I am able to see the response from my model for the query on the terminal.

The recent errors in splunkd.log are:

07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimGroup
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimGroup
07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimGroups
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimGroups
07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimUser
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimUser
07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimUsers
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimUsers
07-30-2024 18:23:29.998 -0500 ERROR SidecarThread [3255548 MainThread] - <stderr> Sidecar: reading standard input
07-30-2024 18:23:29.998 -0500 ERROR SidecarThread [3255548 MainThread] - <stderr> Sidecar: 2024/07/30 18:23:30 [begin] SIGUSR1 handler
07-30-2024 18:23:30.002 -0500 ERROR SidecarThread [3255548 MainThread] - <stderr> Sidecar: 2024/07/30 18:23:30 Supervisor logs printed at : /opt/splunk/var/log/splunk
07-30-2024 18:23:30.098 -0500 ERROR NoahHeartbeat [3255712 SplunkdSpecificInitThread] - event=deleteReceipts Not supported, since no remote queue is configured in inputs.conf
07-30-2024 18:23:30.098 -0500 ERROR NoahHeartbeat [3255712 SplunkdSpecificInitThread] - event=deleteReceipts message="Unable to load smartbus conf"
07-30-2024 18:23:30.110 -0500 ERROR loader [3255846 datalakeinput] - Couldn't find library for: datalakeinputprocessor
07-30-2024 18:23:30.110 -0500 ERROR pipeline [3255846 datalakeinput] - Couldn't find library for: datalakeinputprocessor
07-30-2024 18:23:30.110 -0500 ERROR PipelineComponent [3255548 MainThread] - The pipeline datalakeinput threw an exception during initialize
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimGroup
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimGroup
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimGroups
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimGroups
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimUser
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimUser
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimUsers
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimUsers
The disk is full.  The resolution is to free up space or add more storage.

Indexes should not be allowed to grow large enough to fill the disk.  Indexes should be part of a volume that is smaller than the total disk space available (to allow for metadata).  The fact that MinFreeSpace is 5GB while the remaining disk space is 0 tells me that Splunk is sharing storage with other applications and/or the OS.  That is a Bad Practice. The operating system, $SPLUNK_HOME, and $SPLUNK_DB should be on separate and independent mount points.
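As a rough illustration of the volume idea, a minimal indexes.conf sketch might look like this (the path, size, and index names below are assumptions for illustration, not taken from this environment):

# indexes.conf -- cap total index usage below the physical disk size
# (illustrative: a dedicated index disk, with index data capped at 80 GB)
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 80000

# point the index's buckets at the volume so the cap applies to them
[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

With every index's homePath and coldPath referencing the volume, Splunk starts freezing the oldest buckets when the volume cap is reached, rather than letting the indexes fill the disk.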
If you find the issue happens after the Windows server is restarted, and restarting the Splunk universal forwarder fixes the issue, then try one of the following workarounds.

Use 'Delayed Start' for the Splunk Forwarder service ( https://community.splunk.com/t5/Getting-Data-In/Why-quot-FormatMessage-error-quot-appears-in-indexed...). However, it's hard to configure thousands of DCs.

Or configure the interval as a cron schedule instead.

interval = [<decimal>|<cron schedule>]

[WinEventLog]
interval = * * * * *

By default the wineventlog interval is 60 sec. That means as soon as Splunk is restarted, wineventlog (or any modinput) is immediately started. Subsequently, every 60 seconds (the configured interval) Splunk checks if the modinput is still running and, if not, re-launches it. Instead of setting the interval to 60 sec, if we use a cron schedule to run every minute, then Splunk is not going to launch the modinput immediately. So essentially the idea is to convert the interval setting from a decimal to a cron schedule to introduce a delay.

If the above does not solve the issue, meaning the issue is not related to a Windows server restart, try one of the following workarounds.

Workaround 1, in inputs.conf:
1. Stop the UF.
2. Set batch_size to 1.
3. Start the UF.

[<impacted channel>]
batch_size = 1

Workaround 2, in inputs.conf:
1. Stop the UF.
2. Back up the checkpoint file $SPLUNK_HOME\var\lib\splunk\modinputs\WinEventLog\<impacted channel>. This saves the current record id.
3. In inputs.conf, for the impacted channel, add use_old_eventlog_api=true.
4. Start the UF.
5. Stop the UF after 30 sec.
6. Replace the current record with the record id found in step 2, so that ingestion starts from the right place.

When setting use_old_eventlog_api=false (in case it does help), follow all the above steps so that the UF correctly starts from the desired record id.

[<impacted channel>]
use_old_eventlog_api = true
You probably could define TIME_PREFIX to find that timestamp.  However, is the timestamp present for every event or just once in the file?  If the latter, then start writing code to re-process that file into something Splunk can ingest more easily.
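If the timestamp does lead every event, a props.conf sketch along these lines might work (assuming the YYMMDD_HHMMSS prefix implied by the regex in the question; the sourcetype name and lookahead value are illustrative):

# props.conf -- recognize a leading 240731_123456-style timestamp
[wsjtx_log]
TIME_PREFIX = ^
TIME_FORMAT = %y%m%d_%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 13
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)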
I fine-tuned an LLM and I want to integrate it with Splunk. In a Splunk dashboard, I am going to include a question/answering mechanism where my model answers the question the user has.

For that, I have created an app "customApp" in src/etc/apps, added a user-queries.py file in the apps/customApp/bin folder and a commands.conf file in the apps/myapp/default folder. I loaded the fine-tuned model and generated responses in user-queries.py.

I am getting the following error on the Splunk dashboard when calling this command. Btw, it worked when I hardcoded the output instead of loading the model and asking it to generate responses.

<Error in 'userquery' command: External search command exited unexpectedly with non-zero error code 1.>

Am I following the correct approach to integrate an LLM with Splunk? I checked my splunkd.log; it shows some disk space issues. Can someone please save me from this? Thanks in advance!
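For reference, a minimal commands.conf declaration for a custom search command typically looks something like this (a sketch only; the stanza name and script file follow the description above, and the other settings are illustrative assumptions, not the poster's actual config):

# commands.conf -- declare the custom 'userquery' command backed by bin/user-queries.py
[userquery]
filename = user-queries.py
chunked = true
python.version = python3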
<something> Status!=FIXED
We pull weekly vulnerability reports from Splunk associated with our Qualys data.  I am trying to filter out all records associated with a hostname if the status field equals "Fixed". The data for a couple of hosts might look like this:

Date        Host   Status
2024-07-22  host1  NEW
2024-07-22  host2  NEW
2024-07-23  host1  ACTIVE
2024-07-23  host2  ACTIVE
2024-07-24  host1  ACTIVE
2024-07-24  host2  ACTIVE
2024-07-25  host1  FIXED
2024-07-25  host2  ACTIVE
2024-07-26  host2  ACTIVE
2024-07-27  host2  ACTIVE
2024-07-28  host2  ACTIVE
2024-07-29  host2  ACTIVE

Both host1 and host2 discover a new vulnerability on 7-22.  On 7-23, the status for both flips to "ACTIVE".  On 7-25, however, host1 is now showing a FIXED status. Host2 remains vulnerable through the remaining date range of the report. Since host1 fixed the vulnerability during the timeframe, how could I go about removing all host1 events based on the status field being equal to "Fixed" on the most recent data pull?
I checked splunkd.log and found the error: The diskspace remaining=0 is less than 1 x minFreeSpace on /opt/splunk/var/lib/splunk/_metrics/db

07-29-2024 11:16:16.785 -0500 WARN DiskMon [2047349 indexerPipe] - MinFreeSpace=5000. The diskspace remaining=0 is less than 1 x minFreeSpace on /opt/splunk/var/lib/splunk/_metrics/db
07-29-2024 11:16:16.969 -0500 ERROR Logger [2047618 HttpDedicatedIoThread-3] - Error writing to "/opt/splunk/var/log/splunk/splunkd_access.log": No space left on device

Please help me resolve this.
I checked; python.log is not throwing any errors, just entries like this:

<{"name": "/en-US/static/@8FAC62B5BA1D0F7D9B1061086E2CF5B3713CA4C1EA33ED54B8F3EAA6F4E20FF2/fonts/splunkicons-regular-webfont.woff", "hqp": false, "redirect": 0, "dns": 0, "tcp": 0, "blocked": 9, "request": 23, "response": 1, "processing": null, "load": null, "ts": 13362}, {"name": "long-animation-frame", "hqp": false, "redirect": null, "dns": null, "tcp": null, "blocked": null, "request": null, "response": null, "processing": null, "load": null, "ts": null}, {"name": "long-animation-frame", "hqp": false, "redirect": null, "dns": null, "tcp": null, "blocked": null, "request": null, "response": null, "processing": null, "load": null, "ts": null}>

search.log has:

<ERROR SearchMessages - orig_component="script" app="search" sid="1722375466.3" message_key="EXTERN:SCRIPT_NONZERO_RETURN" message=External search command 'test' returned error code 1.>

The model I am using is an LLM. Is Splunk able to load large language models?