All Topics


Hi, I was wondering if there is a way to blacklist the following event based on the event code and the account name under the Subject field. I want to blacklist events with code 4663 and a Subject account name of COMPUTER8-55$. What would the regex for that look like?

05/10/2024 01:05:35 PM
LogName=Sec
EventCode=4670
EventType=0
ComputerName=myComputer.net
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=10000000
Keywords=Audit Success
TaskCategory=Authorization Policy Change
OpCode=Info
Message=Permissions on an object were changed.

Subject:
Security ID: S-0-20-35
Account Name: COMPUTER8-55$
Account Domain: myDomain
Logon ID: 0x3E7

Object:
Object Server: Security
Object Type: Token
Object Name: -
Handle ID: 0x1718

Process:
Process ID: 0x35c
Process Name: C:\Windows\System32\svchost.exe
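One possible sketch: Windows event log inputs support regex-based blacklists in inputs.conf keyed on event fields, so this can be done without a props/transforms regex. The stanza name and the exact Message layout below are assumptions based on the sample event:

```
# inputs.conf on the forwarder -- [WinEventLog://Security] stanza name assumed
[WinEventLog://Security]
# drop 4663 events whose Subject account name is COMPUTER8-55$
# (?s) lets .* match across the multi-line Message body
blacklist1 = EventCode="4663" Message="(?s)Subject:.*?Account Name:\s+COMPUTER8-55\$"
```

Note that the sample event shown is EventCode 4670, not 4663, so the code in the blacklist would need to match whichever events you actually want dropped.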
I have alerts configured to expire after 100 days and scheduled to run their search query every 10 minutes. I can see the alert search jobs under "| rest /services/search/jobs", and they are using disk space. I could not find anything about this in the logs. Could someone help me understand the relationship between disk quota utilization and the triggered-alert retention period?
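As a starting point for inspecting this, a triggered alert keeps its search job artifact on disk until the job's ttl expires, so a long retention period holds that artifact's disk usage against the owner's search disk quota. A sketch that lists each scheduled job's remaining lifetime next to its footprint (field names as returned by the jobs endpoint):

```
| rest /services/search/jobs
| search isSavedSearch=1
| table label sid ttl diskUsage
| sort - diskUsage
```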
Hi everyone, I have created a multivalued field called combi_fields from some other fields. I display it with | stats values(*) as * by Identity, so now I have a table with Identity and combi_fields. Within combi_fields I want to check whether the first element is the same across all of the multivalue entries for a given Identity. For example:

Identity: ABC
combi_fields:
abcdefg - 231 - 217 - Passed - folder1 - folder2
abcdefg - 441 - 456 - Passed - folder1 - folder2
abcdefg - 113 - 110 - Passed - folder1 - folder2

In this example the first element is the same in every entry, so I should take the entry with the greatest number and output its status, like: ABC abcdefg Passed. The first element can also differ, like below:

Identity: ABC
combi_fields:
abcdefg - 231 - 217 - Passed - folder1 - folder2
abcdefg - 441 - 456 - Passed - folder1 - folder2
xyzabc - 113 - 110 - Passed - folder1 - folder2
xyzabc - 201 - 219 - Passed - folder1 - folder2

Here the output should be:
ABC abcdefg Passed
ABC xyzabc Passed

How can I do this? How can I compare values within a field?
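One possible sketch, assuming the "-"-separated layout shown above: expand the multivalue field, parse out the first element, the number, and the status, then keep the entry with the greatest number per Identity and first element. Appended after the existing stats:

```
| mvexpand combi_fields
| rex field=combi_fields "^(?<prefix>\S+)\s*-\s*(?<num>\d+)\s*-\s*\d+\s*-\s*(?<status>\w+)"
| sort 0 Identity prefix -num
| dedup Identity prefix
| table Identity prefix status
```

The sort/dedup pair keeps the highest num per (Identity, prefix) pair along with that entry's status, which handles both the single-prefix and multi-prefix cases.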
Hi all, I'm trying to get all the saved searches across all apps in Splunk. Could someone explain what the endpoint servicesNS/-/-/saved/searches is and what data it returns?

For reference, I've tried to use that endpoint and match it against saved searches (reports) only, not alerts. But it returns a lot more than expected: the number in the "Reports" tab under "All apps" is much smaller than the number returned by the REST call. Any help or link to docs would be appreciated.
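One reason the numbers differ: that endpoint returns every saved search visible to your user across all apps, alerts included, so alerts have to be filtered out explicitly. A sketch, assuming the usual convention that reports have alert_type "always" and are not tracked:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| where alert_type="always" AND 'alert.track'=0
| table title eai:acl.app eai:acl.owner
```

The Reports tab also applies its own visibility and app filters, so some gap between the two counts can remain even after filtering.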
Why is this add-on not supported anymore? Is there any other alternative for OT/ICS data?
Hello team, I followed the steps on the page below for migration to Splunk Enterprise version 9.2.1: Upgrade to version 9.2 on UNIX - Splunk Documentation. I get the error below when running the start command, and because of it I am unable to complete the migration on the Splunk indexer machine.

Warning: cannot create "/data/splunk/index_data"
Creating: /data/splunk/index_data
ERROR while running renew-certs migration.
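A first thing worth checking, sketched below with the path taken from the error and an assumed service account name: the message suggests the user running splunkd cannot create the index directory, so verify ownership of the parent path before re-running the migration.

```shell
# check who owns the index volume (path from the error message)
ls -ld /data/splunk
# if it is not owned by the user that runs Splunk (often 'splunk'), fix it:
sudo chown -R splunk:splunk /data/splunk
```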
Hello there, I want to render a Splunk app's dashboard on my website securely. Is there any way to do that? I have successfully accessed an existing dashboard's XML definition by following the guideline data/ui/views/{name}. Thanks for your support.
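One common approach, sketched with placeholder host, app, and dashboard names: embed the dashboard page itself in an iframe and use the dashboard URL parameters to hide the surrounding chrome. Authentication still has to be handled separately (for example via SSO), since the iframe loads Splunk Web directly:

```
<!-- hypothetical host/app/dashboard names -->
<iframe width="100%" height="600"
        src="https://splunkhost:8000/en-US/app/myapp/mydashboard?hideEdit=true&amp;hideSplunkBar=true">
</iframe>
```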
Hi all, I have a query that returns results for a particular month, such as how many tickets breached SLA. The month and year are hardcoded in the query. I now want to stop hardcoding the month and instead let the user select the month to get the results. Could you please help?

Query results:
TicketCountSLABreached (TCSB): 2
TotalTicketCount (TTC): 3
IncResolutionTime (TCSB/TTC*100): 66.667
TimeStamp: February 2024
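One sketch, assuming this runs in a Simple XML dashboard (token name is an assumption): add a time input whose token replaces the hardcoded month in the search's time range.

```
<input type="time" token="month_tok">
  <label>Select month</label>
  <default>
    <earliest>-mon@mon</earliest>
    <latest>@mon</latest>
  </default>
</input>
```

The panel search would then use earliest="$month_tok.earliest$" latest="$month_tok.latest$" and derive the displayed month with something like | eval TimeStamp=strftime(_time, "%B %Y") instead of a hardcoded value.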
Hello Splunkers! The Security Posture dashboard has no filters by default that let us adjust the time range, so we see the summary of notable events over the last 24 hours. I want to change that: I have added a time picker that I would like to bind to one panel in Security Posture, "Key Indicators", so that I can see, for example, the summary of notable events over the last 12 hours or 7 days. Can someone explain what needs to be done on the time picker or the dashboard to achieve this, or is there an easier way? Thanks for taking the time to read and reply to my post.
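For reference, a time picker in a Simple XML dashboard looks roughly like this (token name assumed); the panel's search then references the token instead of its fixed 24-hour window:

```
<input type="time" token="posture_time">
  <label>Time range</label>
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
</input>
```

In the Key Indicators panel, the search element would set earliest="$posture_time.earliest$" and latest="$posture_time.latest$". Since Security Posture ships with Enterprise Security, cloning the dashboard before editing is usually safer than changing the default one.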
The search below is giving an error:

| inputlookup E.csv
| search 4Let="ABCD"
| stats count as count3
    [search index=xyz category="Ad" "properties.OnboardingStatus"=Onboarded
    | dedup properties.DeviceName
    | rename properties.DeviceName as DeviceName
    | stats count as count2]
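The error is likely because a subsearch cannot follow stats as a bare argument. One way to line the two counts up side by side, as a sketch, is appendcols:

```
| inputlookup E.csv
| search 4Let="ABCD"
| stats count as count3
| appendcols
    [ search index=xyz category="Ad" "properties.OnboardingStatus"=Onboarded
      | dedup properties.DeviceName
      | stats count as count2 ]
```

This yields a single row with both count3 and count2 as columns.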
I would like to have an investigation created with a notable event recorded in it, using the API. So far I have been able to create an ES investigation and then add an artifact to it via the API. The next step I need to complete is inserting a notable event into the ES investigation via the API. Alternatively, if it is possible to create an investigation from a notable using the API, I would be happy with that option too.
Hi, for the migration of data we need to use SmartStore from Splunk. Please help us understand the following points:

- Is SmartStore available for on-prem implementations?
- What does it cost?
- How do you size the solution?
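For reference, SmartStore is configured in indexes.conf and works on-prem against any S3-compatible object store. A minimal sketch, with placeholder bucket and endpoint values:

```
# indexes.conf -- bucket and endpoint values are placeholders
[volume:remote_store]
storageType = remote
path = s3://my-splunk-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:remote_store/$_index_name
```

Sizing then centers on the local cache (cacheManager settings) rather than on total indexed volume, since warm/cold buckets live in the object store.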
In a Python script I get the error below in the internal logs:

TypeError: Object of type bytes is not JSON serializable

We are using Python 3. How can I get rid of this error in the internal logs?
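This error means some value passed to json.dumps is bytes rather than str; in Python 3 the two are distinct types and the json module only serializes str. Decoding the bytes first avoids it. A minimal sketch (the variable names are illustrative):

```python
import json

raw = b'{"status": "ok"}'  # bytes, e.g. read from a socket or subprocess

# json.dumps({"body": raw}) would raise:
#   TypeError: Object of type bytes is not JSON serializable

text = raw.decode("utf-8")  # decode bytes to str before serializing
print(json.dumps({"body": text}))
```

The fix is to find where the bytes value enters the structure being serialized and decode it there.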
Hi Splunkers, on Linux when I try to download Splunk with wget, it says download.splunk.com is not trusted. Could you please check it? Thanks.

Best regards,
Sekar
While configuring Log Observer I get the error: "Unable to create Splunk Enterprise Cloud client. Invalid or incorrect splunkenterprisecloud certificate". I am following these instructions: https://app.us1.signalfx.com/#/logs/connections/enterpriseCloud/new
I have been asked to create a dashboard for our threat hunters and would like some ideas. They want to know what could be breached via our web servers. So far I have a table with just the hosts we have, and a table with HTTP response counts.
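As one starter panel idea, sketched with assumed index and sourcetype names: break the response counts down by host and error status, since clusters of 4xx/5xx responses per host tend to be more useful to hunters than raw counts:

```
index=web sourcetype=access_combined status>=400
| stats count by host status
| sort - count
```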
The universal forwarder does not parse data except in certain limited situations. Can anyone tell me what those situations are?
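For context, the main exception is structured data: when INDEXED_EXTRACTIONS is set in props.conf on the forwarder, the universal forwarder itself parses the file (header, field extraction, event breaking) before forwarding. A sketch, with an assumed sourcetype name:

```
# props.conf on the universal forwarder -- sourcetype name assumed
[my_csv_data]
INDEXED_EXTRACTIONS = csv
```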
Hi Splunkers, I'm deploying a new Splunk Enterprise environment; in it I have (for now) 2 HFs and a DS. I'm trying to deploy an outputs.conf file to both HFs via the DS; the clients phone home to the DS correctly, but the apps are not downloaded. I checked the internal logs and found no errors related to the app. I followed the docs and the course material used during the Architect course for reference. Below is the configuration I made on the DS.

App name:
/opt/splunk/etc/deployment-apps/hf_seu_outputs/

/opt/splunk/etc/deployment-apps/hf_seu_outputs/default/app.conf:

[ui]
is_visible = 0

[package]
id = hf_outputs
check_for_updates = 0

/opt/splunk/etc/deployment-apps/hf_seu_outputs/local/outputs.conf:

[indexAndForward]
index = false

[tcpout]
defaultGroup = default-autolb-group
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:default-autolb-group]
server = <idx1_ip_address>:9997, <idx2_ip_address>:9997, <idx3_ip_address>:9997

serverclass.conf:

[serverClass:spoke_hf:app:hf_seu_outputs]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:spoke_hf]
whitelist.0 = <HF1_ip_address>, <HF1_ip_address>

File and folder permissions are right; the owner is the user used to run Splunk (in a nutshell, the owner of /opt/splunk). I suppose it is a very stupid issue, but I'm not able to figure it out.
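One thing worth double-checking in the configuration above, sketched with the same placeholder addresses: each whitelist entry in serverclass.conf takes a single pattern, so two hosts need two numbered entries rather than a comma-separated list on one line:

```
# serverclass.conf -- placeholder addresses
[serverClass:spoke_hf]
whitelist.0 = <HF1_ip_address>
whitelist.1 = <HF2_ip_address>
```

If the client never matches the server class, it phones home successfully but is offered no apps, which matches the symptom of clean logs with no download.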
Hi all, I use the query below to get the sum of field values for the latest correlationId, and I need to show the result in a pie chart, but I am getting the values as "other" (screenshot attached).

index="mulesoft" *Upcoming Executions* content.scheduleDetails.lastRunTime="*"
    [search index="mulesoft" *Upcoming Executions* environment=DEV
    | stats latest(correlationId) as correlationId
    | table correlationId
    | format]
| rename content.scheduleDetails.lastRunTime as LastRunTimeCount
| stats count(eval(LastRunTimeCount!="NA")) as LastRunTime_Count count(eval(LastRunTimeCount=="NA")) as NA_Count by correlationId
| stats sum(LastRunTime_Count) as LastRunTime_Count, sum(NA_Count) as NA_Count
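One thing to try: the "other" slice in a pie chart often comes from slice collapsing, where small slices are merged. Assuming this panel is a Simple XML dashboard, the collapsing threshold can be disabled on the chart:

```
<option name="charting.chart.sliceCollapsingThreshold">0</option>
```

It is also worth checking that the final stats produces the category/value shape a pie chart expects; a single row with two sum columns may need a | transpose first.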
Hello, I need some help. I have a folder and an app that writes logs in NDJSON format and creates a new log file every 15 minutes. The configuration I use is this:

[monitor:///Users/yotov/app/.logs/.../*.log]
disabled = false
sourcetype = ndjson
crcSalt = <SOURCE>
alwaysOpenFile = 1

The problem is that the Splunk forwarder doesn't detect newly added files. It reads only the files present at startup and detects newly added content in them, but new files are ignored until the forwarder is restarted. I'm using the latest version of the forwarder and have tried this on both Linux and macOS. What am I missing?
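One variant worth trying, as a sketch: monitor the directory itself recursively and select files with a whitelist, instead of combining the `...` recursion with a wildcard filename in the stanza path. alwaysOpenFile is intended mainly for Windows IIS logs and can usually be dropped:

```
# inputs.conf -- path taken from the original stanza
[monitor:///Users/yotov/app/.logs]
disabled = false
sourcetype = ndjson
whitelist = \.log$
```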