All Topics

Hello, I am trying to expose data within a lookup from a "logic" app to a "presentation" app for users that have the "user" role. To simplify the situation, I have a lookup "lookup_file.csv" with a corresponding lookup definition "lookup_file" in the "logic" app. Both knowledge objects have "global" sharing and permissions set to "read" for the "user" role. Since I am an admin, I gave "read/write" permissions to the "admin" role. When I run the search "| inputlookup lookup_file" from the "presentation" app with my admin user, I have no issues reading the data. When I run the same command with my user that has the "user" role assigned, I get two errors:

1. The lookup table 'lookup_file' is invalid.
2. The lookup table 'lookup_file' requires a .csv or KV store lookup definition.

Here is a diagram that explains the situation: I have tried many configurations but cannot get the data to load in the "presentation" app with a user that has the "user" role. What am I missing? Any help would be greatly appreciated! Best regards, Andrew
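One way to narrow this down (a diagnostic sketch, not a fix): while logged in as the affected "user"-role account, check what the REST layer actually exposes for the definition, since this error usually means the definition or its backing file is not visible from the calling app context:

| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
| search title="lookup_file"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read

Running the same check against /servicesNS/-/-/data/lookup-table-files for lookup_file.csv can show whether the CSV file and the definition ended up with different effective permissions, which is a common cause of the "requires a .csv or KV store lookup definition" message.
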
Dear all, We are trying to install an AppDynamics agent in a Kubernetes cluster. We have successfully deployed appdynamics-operator, but when we deploy cluster-agent (according to the manuals) we get an error:

error":"WATCH_NAMESPACE must be set","stacktrace":"operator-release/appdynamics-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\tappdynamics-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nmain.main

The YAML file is below. Please help me out. Best regards,

I get the result below when I use chart count over field-A by Field-B. We can see there are cells with value 0; is there any solution to replace these 0s with a space? Thanks.

Over field value   by field value1   by field value2   by field value3   by field value4   by field value5   Total
Over value 1       0                 0                 1                 0                 0                 1
Over value 2       0                 0                 0                 603               0                 603
Over value 3       0                 0                 12                0                 0                 12
Over value 4       0                 0                 0                 600               0                 600
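One possible approach (a sketch only; field_A and field_B below are placeholders for your actual field names) is to post-process every chart column with foreach and blank out the zero cells:

| chart count over field_A by field_B
| foreach * [ eval "<<FIELD>>" = if('<<FIELD>>' = 0, " ", '<<FIELD>>') ]

Note that replacing 0 with a space turns those columns into strings, which can affect sorting, and the Total column will also be blanked wherever it happens to be 0.
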
Hi! I've set up the following "app" to be deployed on my Universal Forwarders for Windows:

[WinEventLog://Microsoft-Windows-Windows Defender/Operational]
index = windefender
disabled = false
evt_resolve_ad_obj = 1

This has worked flawlessly for years until this week, when I stopped receiving any updates from that log until a restart of the Universal Forwarder. At first I thought it had something to do with the fact that we had updated all UFs to 8.2.2 too, but today when I did some investigation I noticed that one of the UFs wasn't updated and still used version 7.2. So my guess is that it has something to do with the Splunk Enterprise installation/upgrade (upgraded to 8.2.2 about 1½ weeks ago, from 7.4). It's not that the forwarder stops completely, because I still receive logging from the Security, System etc. logs in the Event Viewer. It seems to just be the "defender" log, and when I restart the Splunk service it starts to send again. Have I missed something, or should I open a ticket with Splunk?
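Before raising a case, it might be worth checking the forwarder's own internal logs around the time the Defender channel goes quiet; a rough sketch (the host value is a placeholder for your UF name):

index=_internal source=*splunkd.log* host=<your_uf> (log_level=WARN OR log_level=ERROR) WinEventLog

Any warnings or errors mentioning the Defender/Operational channel there would help decide whether this is a local input issue or something to take to Splunk support.
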
Hi, I have a firewall log in which some of the destinations do not have SNI, but I have their IPs. I want to create/extract a new field from the destination to get the destination details, for example the resolved host name or the organization. Can someone please advise if this is possible, and how? Thank you in advance.
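For the reverse-DNS part there is a built-in external lookup called dnslookup that can be applied at search time; a sketch, assuming the IP sits in a field named dest_ip (organization/ASN enrichment is not built in and typically needs an additional lookup or app):

| lookup dnslookup clientip AS dest_ip OUTPUT clienthost AS dest_host
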
I have installed the app on local Splunk using the Splunk REST API endpoint https://<host>:<port>/services/apps/local, but when I try to install the app on a remote Splunk instance, I get errors like "Cannot perform action "POST" without a target name to act on" or "Unparsable URI-encoded request data". I included the name (.tar or .spl file) and filename parameters as mentioned in the Splunk docs: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2106/RESTREF/RESTapps#apps.2Flocal Using this endpoint I can create an app on the remote Splunk instance, but I want to upload the app package. I tried with the requests module and with Postman; the same error came back both times. Is there any way to install the app on the remote Splunk instance using Postman or requests? Thank you in advance...
I searched to see if someone had done this already, but I haven't found a good solution, so I wrote my own and thought I'd share it. Sometimes you get stats results which include columns that have null values in all rows. It's a typical result of | rest calls if you're trying to list some Splunk objects. It's not that uncommon that, out of the several dozen or even hundreds of columns you get in your results, many of them are completely empty. So I thought I'd clean the results so they're easier to browse (and a bit lighter on the web browser you're using).
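The author's actual solution is not reproduced here; purely as an illustration, one way to drop all-null columns is to compute per-field non-null counts in a subsearch and feed the surviving field names back into table (this assumes the base search is cheap enough to run twice, and you may want to drop internal fields with | fields - _* first):

<your search>
| table
    [ search <your search>
      | stats count(*) AS *
      | transpose column_name=field
      | where tonumber('row 1') > 0
      | stats values(field) AS field
      | eval search=mvjoin(field, " ")
      | fields search ]
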
I have the following SPL and I want to show the table below. The value of Total must be equal to the count of events (1588). How can I put the total count of events into the Total variable?

index=abc
| stats count as Count by reason_code
| where reason_code != "false"
| addtotals col=t labelfield=reason_code label="Retrieval task cancelled" fieldname="Percentage"
| eval "Percentage"= round((Count/Total) * 100,2)."%"
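As written, addtotals col=t adds a summary row rather than creating a Total field on each row, so the eval is dividing by a field that does not exist. A hedged alternative is to compute the grand total with eventstats before the percentage calculation:

index=abc
| stats count AS Count by reason_code
| where reason_code != "false"
| eventstats sum(Count) AS Total
| eval Percentage = round((Count/Total) * 100, 2)."%"

If the total should include the "false" rows (to reach the full 1588 events), move the eventstats above the where clause.
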
Hello Team, Do Synthetic private agents support Windows 10 machines? In my organization, there are around 60+ private locations from which we perform Synthetic monitoring. Thanks, Kunal
Hello, I'm trying to capture the IP address from the PXE log example shown. I also want to trim any leading zeros so I can use the IP as an index. I feel I'm pretty close on this one.

Log sample:

Operation: BootRequest (1)
Addr type: 1
Addr Len: 6
Hop Count: 0
ID: 0001E240
Sec Since Boot: 65535
Client IP: 018.087.789.006
Your IP: 000.000.000.000
Server IP: 178.187.178.874
Relay Agent IP: 000.000.000.000
Addr: 87:f3:78:a5:78:b2:
Magic Cookie: 63878263

Splunk search:

index="*********" source="D:\\SMS_DP$\\sms\\logs\\SMSPXE.log"
| rex field=_raw "Addr: (?<Time>\d.{16})"
| rex field=_raw "Addr: (?<PXE_MAC>\d.{16})"
| rex field=_raw "Type=97 UUID: (?<PXE_UUID>\d.{33})"
| rex field=_raw "Client IP: (?<PXE_IP>\d.{14})"
| rex field=PXE_IP "^(?<PXE_IP_MOD>\b0+(\d+))"
| rex field=_raw " date=\"(?<PXE_Date>\d.{9})"
| rex field=_raw "><time=\"(?<PXE_Time>\d.{7})"
| rex field=_raw "Type=53 Msg Type: (?<PXE_Traffic>\w.{4})"
| rex field=_raw "Type=93 Client Arch: (?<PXE_Arch>\w.{3})"
| where isnotnull(PXE_Traffic)
| rename host as PXE_Host
| table PXE_Host,PXE_Traffic,PXE_MAC,PXE_IP,PXE_IP_MOD,PXE_UUID,PXE_Arch,PXE_Date,PXE_Time
| sort by PXE_Date, PXE_Time desc

Regex: regex101: build, test, and debug regex
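A minimal sketch for just the two pieces in question, capturing the client IP and stripping the leading zeros from each octet (untested against your full log, so treat the patterns as a starting point):

| rex field=_raw "Client IP: (?<PXE_IP>\d{1,3}(?:\.\d{1,3}){3})"
| eval PXE_IP_MOD = replace(PXE_IP, "(^|\.)0+(\d)", "\1\2")

With the sample above, 018.087.789.006 would come out as 18.87.789.6.
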
I'm having trouble working out how to authenticate to the Splunk Cloud ACS API using a local account. The docs suggest you can do this: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2106/Admin/ConfigureIPAllowList I can hit the API successfully with a SAML token, but I need to be able to use a local account to authenticate right now. Can anyone shed some light on how you're meant to auth with a local account? I've tried using the session token provided by the auth/login endpoint, and also Basic auth (user/pass), but neither works.
Greetings, I am setting up a new 8.2.2 environment on Red Hat 8.1 and am trying to get Splunk to start on boot and run under a different user than root. I can start it manually under the "splunk" user without any problems, but it does not start on boot. What I have done so far:

$SPLUNK_HOME/bin/splunk enable boot-start -user splunk

In /etc/init.d/splunk:

#!/bin/sh
RETVAL=0
. /etc/init.d/functions

splunk_start() {
  echo Starting Splunk...
  su - splunk -c '"/opt/splunk/bin/splunk" start --no-prompt --answer-yes'
  RETVAL=$?
  [ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}

splunk_stop() {
  echo Stopping Splunk...
  su - splunk -c '"/opt/splunk/bin/splunk" stop'
  RETVAL=$?
  [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/splunk
}

splunk_restart() {
  echo Restarting Splunk...
  su - splunk -c '"/opt/splunk/bin/splunk" restart'
  RETVAL=$?
  [ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}

splunk_status() {
  echo Splunk status:
  su - splunk -c '"/opt/splunk/bin/splunk" status'
  RETVAL=$?
}

case "$1" in
  start)   splunk_start ;;
  stop)    splunk_stop ;;
  restart) splunk_restart ;;
  status)  splunk_status ;;
esac
exit $RETVAL

In /opt/splunk/etc/splunk-launch.conf:

# Version 8.2.2
# Modify the following line to suit the location of your Splunk install.
# If unset, Splunk will use the parent of the directory containing the splunk
# CLI executable.
#
# SPLUNK_HOME=/opt/splunk

# By default, Splunk stores its indexes under SPLUNK_HOME in the
# var/lib/splunk subdirectory. This can be overridden here:
#
# SPLUNK_DB=/opt/splunk-home/var/lib/splunk

# Splunkd daemon name
SPLUNK_SERVER_NAME=Splunkd

# If SPLUNK_OS_USER is set, then Splunk service will only start
# if the 'splunk [re]start [splunkd]' command is invoked by a user who
# is, or can effectively become via setuid(2), $SPLUNK_OS_USER.
# (This setting can be specified as username or as UID.)
#
# SPLUNK_OS_USER
SPLUNK_OS_USER=splunk

In sudoers:

splunk ALL=(ALL) NOPASSWD: /opt/splunk/bin/splunk restart
splunk ALL=(ALL) NOPASSWD: /opt/splunk/bin/splunk stop
splunk ALL=(ALL) NOPASSWD: /opt/splunk/bin/splunk start
splunk ALL=(ALL) NOPASSWD: /opt/splunk/bin/splunk status

Could it be an issue with SELinux? Thanks in advance, John
Hi, I've uploaded a file with a Chinese name. The content (which is also in Chinese characters) can be displayed and queried normally, but the source name in Chinese characters displays as garbled text in the web browser. I've changed browsers and changed the CHARSET in props.conf, but neither fixed it. Does anyone know how to solve this issue? Thanks a lot.
Hello guys, I need help building a query that groups these values like the output I have given below.

Current:
apple1
apple-orange
apple-yellow
banna123
banna-red
banna-orange

Output:
apple*
banna*
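A sketch of one way to do this, assuming the values are in a field called item (the field name is a placeholder): keep only the leading letters, append a *, and aggregate on the result:

| eval group = replace(item, "^([a-zA-Z]+).*$", "\1*")
| stats count by group
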
Hello, unfortunately I am having to attempt a restore of copies of old db_* and rb_* structures that were basically rsync'd over time to some cold storage. I am noticing that things like ".bucketManifest" don't exist. I am trying to restore them to a net-new indexer cluster with the index configured in indexes.conf. I am happy to do this on a standalone indexer if that's the right way to do it, assuming that this is even possible. To be clear, I have all of the directories that are prefixed with rb_* and db_*, but nothing else. *EDIT* I actually only have db_*/rawdata/journal.gz and rb_*/rawdata/journal.gz. Thanks
When I try to use the Splunk Add-on for Cisco Meraki for my access points, I get this API error in the logs: meraki.exceptions.APIError: networks, getNetworkEvents - 400 Bad Request, {'errors': ['productType is not applicable to this network']} My Meraki organization has three networks, and only one of them has productTypes = "wireless", so when the add-on iterates through my networks, it aborts when it hits a network that has no matching productType, and the add-on is unable to retrieve events from my wireless network. Please advise how to fix this. Thank you!
I'm following the Line Chart example in the Dashboard Studio app in Splunk:

index=_internal _sourcetype IN ( splunk_web_access, splunkd_access) | timechart count by _sourcetype

"viz_ZrQCy9wp": {
    "type": "viz.line",
    "options": {
        "fieldColours": {
            "splunk_web_access": "#FF0000",
            "splunkd_access": "#0000FF"
        }
    }
},

I cannot get it to set the field name colours in a timechart. I'm having the same issue on other searches, and the second y-axis settings don't appear to work either. Has something changed with how Splunk handles charts in Dashboard Studio? Thanks
Hi, We have a custom search that should alert when a critical host that we have defined in the search is missing. The issue we're having is that we haven't been alerted on some of the hosts not having logs, because the last time they had any logs was an age ago. When I change earliest=-1d to earliest=-1y the hosts I want appear, but the search takes much longer. Is there a way to make it so that for every host value specified, a stats line is created where I can fillnull the fields as appropriate? Here is the search:

| tstats prestats=true count where index="*", (host="host01" OR host="host02" OR host="host_01" OR host="host_02") earliest=-1d latest=now by index, host, _time span=1h
| eval period=if(_time>relative_time(now(),"-1d"),"current","last")
| chart count over host by period
| eval missing=if(last>0 AND (isnull(current) OR current=0), 1, 0)
| where missing=1
| sort - current, missing
| rename current as still_logging, last as old_logs, missing as is_missing
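One hedged way to guarantee a row per critical host without searching a year back is to append a zero-count row for each expected host and then aggregate; the host names below are copied from the search above and the one-day window is kept as an example:

| tstats count where index=* (host="host01" OR host="host02" OR host="host_01" OR host="host_02") earliest=-1d latest=now by host
| append
    [| makeresults
     | eval host=split("host01,host02,host_01,host_02", ",")
     | mvexpand host
     | eval count=0
     | table host count]
| stats sum(count) AS recent_events by host
| where recent_events=0

Hosts that logged nothing in the window still get a row from the appended subsearch, so they survive the aggregation and show up with recent_events=0.
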
I have quiz values for 10 quizzes. Each quiz is a column, and the values are 0-100 in each row. I am trying to calculate the average of each column and have that as a point on a line chart, with 0-100 as the y-axis and each quiz as an x-axis column. For example:

| chart avg(quiz_01) AS "Quiz 1 Average", avg(quiz_02) AS "Quiz 2 Average", avg(quiz_03) AS "Quiz 3 Average"

But all of the points end up in the same column in the line chart. Thanks
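Because the chart produces a single row, a line chart plots every series at one x position. One way around it (a sketch using the field names from the example above) is to transpose the result so each quiz becomes its own row on the x-axis:

| stats avg(quiz_01) AS "Quiz 1 Average", avg(quiz_02) AS "Quiz 2 Average", avg(quiz_03) AS "Quiz 3 Average"
| transpose column_name="Quiz"
| rename "row 1" AS "Average Score"

The line chart can then use Quiz as the x-axis and Average Score as the single series.
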
I have a field with values like below:

(a)
(a,b)
(c)
(a,c)

I am trying to parse these values and get stats like below:

a 3
b 1
c 2
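A sketch assuming the values live in a field called my_field (the field name is a placeholder): strip the parentheses, split on the commas, expand the multivalue, and count:

| eval item = split(trim(my_field, "()"), ",")
| mvexpand item
| stats count by item

For the sample values above this yields a=3, b=1, c=2.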