All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I'm running into a limitation with Splunk custom apps: I want the admin to be able to set an API key for my third-party integration, and I want every user to be able to read that secret so they can actually run the custom commands that call the third-party API, ideally without the admin having to hand out list_storage_passwords to everyone. Is there any way around this, or are we still limited to the workarounds described below, i.e. granting list_storage_passwords to everyone and then retroactively applying fine-grained access controls to every secret? How are devs accomplishing this? https://community.splunk.com/t5/Splunk-Dev/What-are-secret-storage-permissions-requirements/m-p/641409 --- This idea is 3.5 years old at this point: https://ideas.splunk.com/ideas/EID-I-368
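For reference, a minimal sketch of how a stored secret is read back over REST at search time; the storage/passwords endpoint is the standard one, but the app name and realm below are hypothetical, and the call only returns results for roles that hold list_storage_passwords, which is exactly the limitation in question:

| rest /servicesNS/nobody/my_custom_app/storage/passwords splunk_server=local
| search realm="my_third_party_realm"
| table title realm clear_password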
Oct 30 06:55:08 Server1 request-default Cert x.x.x.x - John bank_user Viewer_PIP_PIP_env vu01 Appl Test [30/Oct/2023:06:54:51.849 -0400] "GET /web/appWeb/external/index.do HTTP/1.1" 200 431 7 9 8080937 x.x.x.x /junctions 25750 - "OU=00000000+CN=John bank_user Viewer_PIP_PIP_env vu01 Appl Test,OU=st,O=Bank,C=us" bfe9a8e8-7712-11ee-ab2e-0050568906b9 "x509: TLSV12: 30" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36"

I have the above in the log. I have a field extraction (regular expression) to extract the user, which in this case is "John bank_user Viewer_PIP_PIP_env vu01 Appl Test". The alert did find this user but reported the user name as just "john". Some other users who have a space in their name show up in the alert fine. How do I fix the extraction so the entire user name shows up in the alert?
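Without seeing the current regex, here is a minimal sketch of one way to capture the full name, under the assumption that the user string always sits between the "- " following the client IP and the " [" that opens the timestamp:

| rex "Cert\s+\S+\s+-\s+(?<user>.+?)\s+\["

If the existing extraction stops at the first space (e.g. it uses \w+ or \S+), a bounded non-greedy capture like this should keep the whole value.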
I am trying to use the following search to make a timechart of security incident sources, but Splunk is reporting zeros for all the counts, which I can confirm is NOT accurate at all. I think the issue is that I need to use a different time field for the timeline. Can someone assist me in making this chart work?

index=sir sourcetype=sir
| rex field=dv_affected_user "(?<user>[[:alnum:]]{5})\)"
| rex mode=sed field=opened_at "s/\.0+$//"
| rex mode=sed field=closed_at "s/\.0+$//"
| rename opened_at AS Opened_At, closed_at AS "Closed At", number AS "SIR Number", dv_assignment_group AS "Assignment Group", dv_state AS State, short_description AS "Short Description", close_notes AS "Closed Notes", dv_u_organizational_action AS "Org Action", u_concern AS Concern, dv_u_activity_type AS "Activity Type", dv_assigned_to AS "Assigned To"
| eval _time=Opened_At
| eval Source=coalesce(dv_u_specific_source, dv_u_security_source)
| fillnull value=NULL Source
| table Source, _time, "SIR Number"
| timechart span=1mon count usenull=f by Source
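A likely culprit is that _time is being assigned the raw Opened_At string, which timechart cannot bucket. A hedged sketch of the fix, assuming opened_at looks like 2023-10-30 06:54:51 (adjust the strptime format to the actual values in your data):

| eval _time=strptime(Opened_At, "%Y-%m-%d %H:%M:%S")
| eval Source=coalesce(dv_u_specific_source, dv_u_security_source)
| fillnull value=NULL Source
| timechart span=1mon count usenull=f by Source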
In Dashboard Studio I have a panel with a list of the top 10 issue types. I want to set 3 tokens with numbers 1, 2 and 3 of this top 10, to use these in a different panel search to show the (full) events.

index=.....      ("WARNING -" OR "ERROR -")
| rex field=_raw "(?<issuetype>\w+\s-\s\w+)\:"
| stats count by application, issuetype
| sort by -count
| head 10

The result varies and might be:

count  issuetype
345    ERROR - Connectbus
235    Warning - Queries
76     Error - Export
45     Error - Client
32     Warning - Queue
…

Now I want to show the events of the top 3 issue types of this list in the following panels, by storing the first 3 issue types in $tokenfirst$, $tokensecond$ and $tokenthird$ and searching for those values. I selected "use search result as token", but how do I select only the first 3 results into 3 different tokens (and of course only after the top 10 is calculated)?
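A hedged sketch of one way to do it: let a secondary (or chained) search pivot the top three issue types into a single row, then map that row's fields to tokens with the "use search result as token" option (e.g. $tokenfirst$ reads issue1). The field names issue1..issue3 are just illustrative:

index=..... ("WARNING -" OR "ERROR -")
| rex field=_raw "(?<issuetype>\w+\s-\s\w+)\:"
| stats count by issuetype
| sort - count
| head 3
| streamstats count AS rank
| eval rank="issue".rank, row="top3"
| xyseries row rank issuetype
| fields - row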
I am migrating to auth0 for SAML, which authenticates against Active Directory for Splunk. Currently Splunk just uses Active Directory. I have the realName field set to the "nickname" attribute in the SAML response, which is the username, but when I run searches or make dashboards/alerts they are assigned to the user_id attribute, which is gibberish. I'm wondering how we can get knowledge objects assigned to the friendly username instead of the user_id, because I'm curious whether a user will still be able to see their historical knowledge objects now that the owner value is different, unless it is somehow mapped to it.
Hi, I am looking for a way to find scheduled searches in Splunk that have not been used for several weeks by any user or app (for example, a user left and their search is no longer looked at). I tried focusing on the audit logs for non-ad-hoc searches and on the saved-searches REST API, but I wasn't able to get a meaningful result from either.
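A rough starting point, under the assumption that "used" means the search was actually dispatched: list scheduled searches over REST and left-join the last dispatch recorded in the audit index (the 30-day window is arbitrary, and field availability in _audit can vary by version):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1
| rename title AS savedsearch_name
| join type=left savedsearch_name
    [ search index=_audit action=search savedsearch_name=* earliest=-30d
      | stats latest(_time) AS last_dispatch BY savedsearch_name ]
| where isnull(last_dispatch)
| table savedsearch_name eai:acl.app eai:acl.owner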
Hello, I had to rename a bunch of rules yesterday, so I cloned them from the Searches, Reports, and Alerts dashboard. They all have global permissions (all apps). For some reason I can't find any of the cloned rules under the Content Management section. Is there a reason why they aren't showing there? Thanks!
I'm looking to close out (or delete) all notable events that were created prior to a specific date time.  The way they're trying to run reports, it is easier to delete them or close them than it would be to filter them from the reports.  Is there a way to use an eval query (or similar) or would it be best to use the API to close them?  Or am I SOL and I need to filter from the dashboard / report query level?
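As a starting point, a hedged sketch of a search just to scope which notable events would be affected (actually closing them would still have to happen through Incident Review bulk edit or Enterprise Security's notable-update mechanism); the cutoff date here is only an example:

index=notable latest="01/01/2024:00:00:00"
| table _time rule_name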
Hello, I am wondering how other security service providers have handled this issue, or what best practice is.

To plan for least privilege, indexes would be separated out by group. We could store all data related to a group in a respective index: network traffic, network security, antivirus, Windows event data, etc., all in a single index for the group, and give that group permissions to the index. An issue with this scenario is search performance. Searches may be performed on network traffic, or on host data, or on antivirus data, but Splunk will have to search the buckets containing all of the other, unrelated data. If antivirus data is only producing 5GB a day but network traffic is producing 10TB a day, this will have a huge negative effect on searches for antivirus data. This will be compounded with SmartStore (S2), where IOPS will be used to write the bucket back to disk.

If least privilege weren't an issue, it would be optimal to create an index per data type: network traffic would have its own index, Windows hosts would have their own index. But the crux of architecting in this fashion is how to implement least privilege: one group cannot be allowed to see the host data of another group. One idea to get around this is to limit search capability by host, but that would require much work from the Splunk team, and it is not 100% certain if wildcards are used. Another idea is to simply create a separate index for each data type for each group. My concern with this is scaling. If we have 10 groups that each require 10 indexes, that's 100 indexes. If we have 50, that's 500; if we have 100, that's 1,000. This does not scale well.

Thank you in advance for your help
Sample data:

<?xml version="1.0" encoding="UTF-8" ?>
<Results xmlns:xsi="http://www.w3.org">
<Result>
<Code>OK</Code>
<Details>LoadMessageOverviewData</Details>
<Text>Successful</Text>
</Result>
<Data>
<ColumnNames>
<Column>Sender&#x20;Component</Column>
<Column>Receiver&#x20;Component</Column>
<Column>Interface</Column>
<Column>System&#x20;Error</Column>
<Column>Waiting</Column>
</ColumnNames>
<DataRows>
<Row>
<Entry>XYZ</Entry>
<Entry>ABC</Entry>
<Entry>Mobile</Entry>
<Entry>-</Entry>
<Entry>3</Entry>
</Row>
</DataRows>
</Data>
</Results>

Hello, I need to extract fields from the above XML data. I have tried the props below, but the data is still not being extracted properly.

Props.conf
CHARSET=UTF-8
BREAK_ONLY_BEFORE = <\/Row>
MUST_BREAK_AFTER = <Row>
SHOULD_LINEMERGE  = true
KV_MODE = xml
pulldown_type = true
DATETIME_CONFIG = CURRENT
NO_BINARY_CHECK=true
TRUNCATE=0
description=describing props config
disabled=false

How can I parse this data? Thanks in advance
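If the whole <Results> document ends up in a single event, a hedged sketch of pulling the columns and row entries out at search time with spath (the paths simply mirror the element nesting above, and the output fields become multivalue):

| spath path=Results.Data.ColumnNames.Column output=column_name
| spath path=Results.Data.DataRows.Row.Entry output=entry
| table column_name entry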
Hi to everyone, for a project I need to deploy a test environment with Splunk, and I need to capture stream logs in order to analyze them. For this project I have deployed Splunk Enterprise (9.1.2) on Ubuntu 20.04, and on another VM (also Ubuntu 20.04) I installed my UF (9.1.2). On the UF I installed the Splunk Add-on for Stream Forwarders (8.1.1) to capture packets, and on my Splunk Enterprise instance the Splunk App for Stream (8.1.1). I followed all the installation and configuration steps and debugged some issues, but I still have an error that I don't know how to fix. In the streamfwd.log file I see this error:

2024-01-24 06:14:03 ERROR [140599052777408] (SnifferReactor/PcapNetworkCapture.cpp:238) stream.NetworkCapture - SnifferReactor unrecognized link layer for device <ens33>: 253
2024-01-24 06:14:03 FATAL [140599052777408] (CaptureServer.cpp:2337) stream.CaptureServer - SnifferReactor was unable to start packet capture

The sniffer interface ens33 is the right interface on which I want to capture stream packets, but I don't understand why it is not recognized. If you have any idea I will be very grateful.
Hey everyone, I am in a situation where I have to provide a solution to a client of mine. Our application is deployed on their k8s and logs everything to stdout, where they pick it up and put it into a Splunk index, let's call the index "standardIndex". Due to a change in legislation, and a change in how they operate under that legislation, we need to send specific logs, selected by message content (easiest for us..), to a special index we can call "specialIndex". I managed to rewrite the messages we log to satisfy their needs in that regard, but now I am failing to route those messages to a separate index. The collectord annotations I put in our patch look like this, and they seem to work just fine:

spec:
  replicas: 1
  template:
    metadata:
      annotations:
        collectord.io/logs-replace.1-search: '"message":"(?P<message>Error while doing the special thing\.).*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.1-val: '${timestamp} message="${message}" applicationid=superImportant status=failed'
        collectord.io/logs-replace.2-search: '"message":"(?P<message>Starting to do the thing\.)".*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.2-val: '${timestamp} message="${message}" applicationid=superImportant status=pending'
        collectord.io/logs-replace.3-search: '"message":"(?P<message>Nothing to do but completed the run\.)".*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.3-val: '${timestamp} message="${message}" applicationid=superImportant status=successful'
        collectord.io/logs-replace.4-search: '("message":"(?P<message>Deleted \d+ of the thing [^\s]+ where type is [^\s]+ with id)[^"]*").*"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.4-val: '${timestamp} message="${message} <removed>" applicationid=superImportant status=successfull'

My only remaining goal is to send these specific messages to a specific index, and this is where I can't follow the Outcold documentation very well. Actually, I am even doubting whether this is possible, but I may not be understanding it completely. Does anyone have a hint?
We encounter an issue with our IIS logs in an Azure storage account. Our logging data becomes duplicated at the end of the hour, when the last-modified timestamp of an already closed log file is updated. This is caused by a known bug in the Azure extension that we are using and cannot update; however, it is the behaviour of the add-on that causes the duplication in Splunk. An example error can be seen below:

2024-01-24 12:03:09,811 +0000 log_level=WARNING, pid=7648, tid=ThreadPoolExecutor-1093_9, file=mscs_storage_blob_data_collector.py, func_name=_get_append_blob, code_line_no=301 | [stanza_name="prd10915-iislogs" account_name="prd10915logs" container_name="iislogs" blob_name="WAD/bd136adb-2f39-4042-94f3-2ac21450cc22/IaaS/_prd10920EOLAUWebNeuVmss_2/u_ex24012410_x.log"] Invalid Range Error: Bytes stored in Checkpoint : 46738047 and Bytes stored in WAD/bd136adb-2f39-4042-94f3-2ac21450cc22/IaaS/_prd10920EOLAUWebNeuVmss_2/u_ex24012410_x.log : 46738047. Restarting the data collection for WAD/bd136adb-2f39-4042-94f3-2ac21450cc22/IaaS/_prd10920EOLAUWebNeuVmss_2/u_ex24012410_x.log

The error happens in the %SPLUNK_HOME%\etc\apps\Splunk_TA_microsoft-cloudservices\lib\mscs_storage_blob_data_collector.py file on line 280. The blob stream downloader expects more bytes than the known checkpoint and raises an exception when the byte counts are the same. This exception is then handled by this piece of code:

blob_stream_downloader = blob_client.download_blob(
    snapshot=self._snapshot
)
blob_content = blob_stream_downloader.readall()
self._logger.warning(
    "Invalid Range Error: Bytes stored in Checkpoint : "
    + str(received_bytes)
    + " and Bytes stored in "
    + str(self._blob_name)
    + " : "
    + str(len(blob_content))
    + ". Restarting the data collection for "
    + str(self._blob_name)
)
first_process_blob = True
self._ckpt[mscs_consts.RECEIVED_BYTES] = 0
received_bytes = 0

Here the blob is marked as new and is fully re-downloaded and re-ingested, causing our data duplication. We would like to request a change to the add-on that prevents this behaviour when the checkpoint byte count is equal to the log file byte count: the add-on should not assume that a file has grown just because its last-modified timestamp has changed.
Hi Team, my requirement is to install the Universal Forwarder on an on-premises Kubernetes system. Please point me to a guide for installing it on Kubernetes.
Hi, I have the SPL below and I am not able to get the expected results. Please could you help? If I use stats count by ... then I do not get the expected result shown below.

SPL:

basesearch earliest=@d latest=now
| append [ search earliest=-1d@d latest=-1d ]
| eval Consumer = case(match(File_Name,"^ABC"), "Down", match(File_Name,"^csd"),"UP", match(File_Name,"^CSD"),"UP",1==1,"Others")
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
| table Name Consumer Today Yesterday percentage_variance

Expected result:

Name  Consumer  Today  Yesterday  percentage_variance
TEN   UP        10     10         0.0%
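A hedged sketch of the aggregation step that seems to be missing before the percentage eval, assuming Today and Yesterday are meant to be event counts per Name and Consumer (field names are taken from your search, everything else is illustrative):

basesearch earliest=-1d@d latest=now
| eval Consumer=case(match(File_Name,"^ABC"),"Down", match(File_Name,"^(csd|CSD)"),"UP", 1==1,"Others")
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| stats count(eval(Day="Today")) AS Today, count(eval(Day="Yesterday")) AS Yesterday BY Name, Consumer
| eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
| table Name Consumer Today Yesterday percentage_variance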
We want to install Splunk in our golden image using Packer. This is for deploying servers from golden images in Azure for RHEL 8 and Ubuntu 22. I found documentation for Windows (Integrate a universal forwarder onto a system image - Splunk Documentation) but not for RHEL/Ubuntu. Any help appreciated.
Hello, for a dashboard the user wants the canvas size to fit his screen every time he opens the dashboard. How can I define this?
01-24-2024 10:24:31.312 +0000 WARN sendmodalert [3050674 AlertNotifierWorker-0] - action=slack - Alert action script returned error code=1
01-24-2024 10:24:31.312 +0000 INFO sendmodalert [3050674 AlertNotifierWorker-0] - action=slack - Alert action script completed in duration=96 ms with exit code=1
01-24-2024 10:24:31.304 +0000 FATAL sendmodalert [3050674 AlertNotifierWorker-0] - action=slack STDERR - Alert action failed
01-24-2024 10:24:31.304 +0000 INFO sendmodalert [3050674 AlertNotifierWorker-0] - action=slack STDERR - Slack API responded with HTTP status=200
01-24-2024 10:24:31.304 +0000 INFO sendmodalert [3050674 AlertNotifierWorker-0] - action=slack STDERR - Using configured Slack App OAuth token: xoxb-XXXXXXXX
01-24-2024 10:24:31.304 +0000 INFO sendmodalert [3050674 AlertNotifierWorker-0] - action=slack STDERR - Running python 3
01-24-2024 10:24:31.212 +0000 INFO sendmodalert [3050674 AlertNotifierWorker-0] - Invoking modular alert action=slack for search="Updated Testing Nagasri Alert" sid="scheduler_xxxxx__RMDxxxxxxx" in app="xxxxx" owner="xxxx" type="saved"

I have done the entire setup correctly: I created an app with the chat:write scope, added the channel to the app, and got the OAuth token and the webhook link of the channel. But sendalert is failing with error code 1, and the GitHub README "slack-alerts/src/app/README.md at main · splunk/slack-alerts (github.com)" doesn't mention this. Is it an issue on the Splunk end or the Slack end? What would be the fix for it?
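To see the full stderr from the alert action (the Slack Web API often returns HTTP 200 together with an error string in the response body, such as channel_not_found or not_in_channel), a hedged sketch of a search over the internal logs; the alert name is taken from your log excerpt:

index=_internal sourcetype=splunkd component=sendmodalert action=slack
| search "Updated Testing Nagasri Alert" OR STDERR
| table _time log_level _raw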
When I search different date ranges in my Splunk dashboard, it shows the same results. For example, I select 1/1/2024 to 1/10/2024 and then 1/3/2024 to 1/4/2024 and get the same numbers. I am adding earliest=-7d@d latest=+1d to the query, but when I remove those, the values do not match. Please help me out with this.
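If earliest/latest are hard-coded inside the panel's SPL, they override whatever range is picked in the dashboard, which would explain identical results for different selections. A hedged sketch assuming the panel should follow a time-picker input instead (the index and token name timepicker_tok are illustrative):

index=your_index earliest=$timepicker_tok.earliest$ latest=$timepicker_tok.latest$
| stats count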
Hi All, I need to collect system metrics and monitor local files on Solaris servers. I'm considering installing the Universal Forwarder (UF) and using the Splunk add-on for Unix to collect system metrics. Has anyone implemented this before? Any insights or thoughts on this approach?