All Posts

You are correct about the size being affected by backup and save files. The limits.conf setting applies to the base file only, so the total could be 4x that value. I don't see how the nmon TA has any bearing on this.
Yes @gcusello, I tried in a private browsing window and cleared the browser cache, but I still have the same problem.
Thank you very much for the support @yuanliu. Your query works perfectly. However, it gave me more results than expected. Basically, I only want to see the Palo Alto logs where the threat field equals "SMB: User Password Brute Force Attempt(40004)", and to resolve the Windows log fields (ComputerName & username) plus the Symantec log fields (Symantec Detected User @ Destination & Symantec Destination Node) based on the dest_ip value of the Palo Alto logs. For your understanding, below is what I get from your query (10000 statistics - 24hrs): I need something similar to the below table (106 statistics - Last 24hrs): Once again, thank you very much for your support. I hope you can figure out my real requirement from this. Cheers, NeoKevin
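For reference, a narrowing filter applied before the field resolution might look like the sketch below; the index, sourcetype, and field names here are assumptions for illustration, not anything taken from the original query.
index=pan_logs sourcetype="pan:threat" threat="SMB: User Password Brute Force Attempt(40004)"
| fields _time, dest_ip, threat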
Your sample events didn't have duplicates in them. Please share some representative unformatted events and explain what your expected results would be from those events.
Some KOs can be assigned to new apps.  Look for "move" as an edit option. For those KOs that cannot be moved, you'll have to do so manually.  Create a new object by the same name in another app and copy/paste the values from the original app.  Then delete the original object.  In Splunk Cloud, do this in an app you then upload.  If the original app is Search then you cannot upload a replacement app and will have to manipulate the object in the UI.
Hi @ITWhisperer, THANKS for the above query, which worked to get the data from that JSON into table form, but the data are displayed as duplicates:
index="" source IN ("") "request body" | spath | spath input=eventBody,eventBody.objectIds{}
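If the duplication comes from running spath twice and expanding the multivalue objectIds, one way to collapse the repeated rows could be a dedup on the extracted fields. This is only a sketch and assumes the field names from the sample events (objectType, objectIds, version):
index="" source IN ("") "request body"
| spath input=eventBody
| spath input=eventBody path=objectIds{} output=objectId
| mvexpand objectId
| dedup _time objectType objectId version
| table _time objectType objectId version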
Great, that works when using Between, but not for Last XXX; I get this error:
Error in 'eval' command: The expression is malformed. Expected ).
Could you please explain why this works? This is OK, as ideally I would like to only have the DATE RANGE -- BETWEEN option on the time picker. I think I need to make a CSS file for this to work - do you know if it's possible to do this without CSS?
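For context, with a Between range the time picker tokens are typically epoch numbers, while presets like Last 24 hours hand the eval a relative string such as -24h, which is what breaks the expression. A minimal sketch of handling both, assuming a time input token named time_tok, might be:
| eval start_epoch=if(isnum(tonumber("$time_tok.earliest$")), tonumber("$time_tok.earliest$"), relative_time(now(), "$time_tok.earliest$"))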
Hi Splunkers, in our Splunk Cloud environment we have 2 needs:
- Reassign knowledge object owner
- Reassign knowledge object app
The first point is well known and we have already applied it, assigning all KOs created by us to a service account. I don't remember if the second one is possible: can I reassign a KO's app? For example, if we assigned an alert to the Search app, is it possible to change it to another one? And if yes, how?
Hi @beneteos, yes, use correct reference hardware and monitor resource usage. Ciao. Giuseppe
Hi @gcusello and thank you for your answer! We do not use a DS for Splunk forwarder management (we do that with Puppet), so we only have 2 clients, our indexers, in order to replicate HEC configs and so get the same tokens on both indexers. Normally we don't need to change these configs except to add new HEC inputs, so really only occasionally. As for the Monitoring Console, we are mainly afraid of the warnings saying that it could lead to incomplete search results. But as I said, we only have two indexers that are already configured as search peers on the SH, so it's exactly the same pool of indexers that we want to integrate into MC distributed mode. In terms of load, our search head is used very sparingly, with only a few searches per day. I think there's a maximum of ten connections/searches per day. So if I understand you correctly, there are no real risks of functionality loss; the question is more about load, right?
That worked like a charm!!  Thanks again! 
Since this is JSON, if you haven't already ingested it as JSON, you can extract the fields with the spath command:
| spath | spath input=eventBody
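A brief usage sketch against the "request body" sample event, with hypothetical index and source values, could look like this:
index=your_index source=your_source "request body"
| spath
| spath input=eventBody
| table _time, objectType, objectIds{}, version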
After @fredclown above stated there was no way to do a fourth dimension in a Bar Chart, I opened a case with Splunk. Lo and behold, he was right, which led to some changes in our SPL. I thought I would share, just in case somebody else runs into the same issue.
```Grab badging data for the previous week```
index=* sourcetype="ms:sql" CARDNUM=* earliest=-4w@w latest=-0w@w
| bin span=1d _time
| rename CARDNUM as badgeid
| stats count by badgeid _time
| join type=left badgeid
    ```Use HR records to filter on only Active and LOA employees```
    [search index=identities sourcetype="hr:ceridian" ("Employee Status"="Active" OR "Employee Status"="LOA*") earliest=-1d@d latest=@d
    | eval "Employee ID"=ltrim(tostring('Employee ID'),"0")
    | stats count by "Employee ID" _time
    | fields - time
    | rename "Employee ID" as employeeID
    | fields - count
    ```Filter on Hybrid Remote users in Active Directory that are not Board Members and are in the Non-Branch region```
    | lookup Employee_Data_AD_Extract.csv employeeID OUTPUT badgeid badgeid_1 RemoteStatus District employeeID Region]
| where like(RemoteStatus,"%Hybrid%") AND NOT like(District,"Board Members") AND Region="Non-Branches"
| eval badgeid=coalesce(badgeid,badgeid_1)
```Calculate the number of badge check-ins in a given week by badgeid```
| bin span=1w _time
| stats latest(Region) as Region latest(employeeID) as employeeID latest(District) as District latest(RemoteStatus) as status count as "weekly_badge_in" by badgeid _time
```Calculation to determine the number of employees within District that are Hybrid Remote but have not badged-in```
| join District
    [| inputlookup Employee_Data_AD_Extract.csv
    | fields badgeid badgeid_1 RemoteStatus District employeeID Region
    | where like(RemoteStatus,"%Hybrid%") AND NOT like(District,"Board Members") ```AND NOT like(District,"IT")``` AND NOT like(District,"Digital") AND Region="Non-Branches"
    | stats count as total by District]
| eval interval=case('weekly_badge_in'>=3,">=3", 'weekly_badge_in'<3,"<3")
| table _time District interval total
```Modify District Here vvv```
| where District="Compliance"
| stats max(total) as total count as total_intervals by _time District interval
| sort District - _time
| fields - District
| chart max(total) as total_emp max(total_intervals) as total by _time interval
| rename "total: <3" as "<3" "total: >=3" as ">=3" "total_emp: <3" as total
| fields - "total_emp: >=3"
| stats sum(eval(total-('<3'+'>=3'))) as no_badge_ins last("<3") as "<3" last(">=3") as ">=3" by _time
| rename _time as week_of
| eval week_of=strftime(week_of,"%Y-%m-%d")
I have two inputs on my Dashboard Studio (JSON) dashboard and both are dynamic: the first is a dynamic multiselect input and the second is a dynamic dropdown. The first input shows a list of rule names and the second dropdown input is a sensitive label. When I select multiple rule names from the multiselect input, I would like to see the corresponding sensitive labels populated dynamically in the next dropdown. But currently it shows a warning saying no results were found, whereas the same works fine when I select a single value in the multiselect input. Please help me fix this.
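As a point of comparison, the populating search of the second dropdown usually needs the multiselect token to expand into a valid IN (...) list (via the input's prefix/suffix and value prefix/value suffix settings). A minimal sketch, with hypothetical index, field, and token names:
index=security_rules rule_name IN ($rule_name_tok$)
| stats count by sensitive_label
| fields sensitive_label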
Hi Team,
First event, where I need to retrieve the uniqObjectIds:
{"name":"","awsRequestId":"","hostname":"","pid":8,"level":30,"uniqObjectIds":["275649"],"uniqObjectIdsCount":1,"msg":"unique objectIds","time":"","v":0}
Below is one event where I want to retrieve the fields objectType, objectIds, and version:
{"name":"","awsRequestId":"","hostname":"","pid":8,"level":30,"eventBody":{"objectType":"material","objectIds":["275649"],"version":"latest"},"msg":"request body","time":"2023-11-06T22:48:03.330Z","v":0}
I want to retrieve the data from the above two events with the query below:
index="" source IN ("")
| eval PST=_time-28800
| eval PST_TIME=strftime(PST, "%Y-%d-%m %H:%M:%S")
| eval split_field=split(_raw, "Z\"}")
| mvexpand split_field
| rex field=split_field "objectIdsCount=(?<objectIdsCount>[^,]+)"
| rex field=split_field "uniqObjectIdsCount=(?<uniqObjectIdsCount>[^,]+)"
| rex field=split_field "recordsCount=(?<recordsCount>[^,]+)"
| rex field=split_field "sqsSentCount=(?<sqsSentCount>[^,]+)"
| where objectType="material"
| table _time, PST_TIME, objectType, objectIdsCount, uniqObjectIdsCount, recordsCount, sqsSentCount
| sort _time desc
On the surface, yes it's a little odd, but it's hard to say for sure without knowing more about the nature of the data and the search.
Hi @beneteos, it usually isn't a best practice to have the Deployment Server on the same server as the Search Head; this is possible if you don't have heavy SH usage and you have fewer than 50 clients to manage with the DS, otherwise you need a dedicated server for the DS. About the Monitoring Console, you can use it on the Search Head if SH usage is light: monitor the resource usage of your SH. About the configuration, you have to configure all the servers as search peers for the Search Head. Ciao. Giuseppe
We have recently upgraded from Splunk 8.x to 9.x, after which all Python scripts are failing with SSL errors. We have updated all packages for Python 3.7, but it still throws an SSL error:
File "/apps/splunk/etc/apps/xxxx/bin/splunklib/binding.py", line 32, in <module>
    import ssl
File "/apps/splunk/lib/python3.7/ssl.py", line 98, in <module>
    import _ssl # if we can't import it, let the error propagate
ImportError: libssl.so.1.0.0: cannot open shared object file: No such file or directory.
In the path /apps/splunk/lib I have libssl.so.1.0.0.
Hello,
We have migrated our standalone installation of Splunk Enterprise to a "Small enterprise distributed deployment". This is a really small distributed deployment because the load is essentially on indexing capacity, even though it's less than 100 GB daily (our licence allows 80 GB), and search load is really low. So we have:
- 1 Search Head
- 2 indexers (no cluster)
The search head also acts as license master and deployment server (just HEC configs and indexes replication to indexers). Now the question is: is it possible to install the Monitoring Console on the Search Head node? We have seen the recommendation here, and especially:
"When you set up the monitoring console in distributed mode, it creates one search group for each server role, identified cluster, or custom group. Unless you use a "splunk_server_group" or the "splunk_server" option, only search peers that are members of the indexer group are searched by default. Because all searches that run on the monitoring console instance follow this behavior, non-monitoring console searches might have incomplete results."
I'm not sure I really understand this, but as we only have 2 indexers, and since they are the nodes that we want to put in the indexer group on the MC side, could it really lead to incomplete searches? The same advice seems to be given on the dashboard, via the MC general setup page, when trying to activate distributed mode:
"Do not configure the DMC in distributed mode if this is a production search head. Doing so can change the behavior of all searches on this instance. This is dangerous and unsupported."
As already said, load is a secondary consideration because we do not have heavy search activity. Thanks a lot.
Thanks for your reply! I guess I should clarify my question though - I can figure out how to generate them; the question is where do I put them? Do I create additional fields in the lookup for the user and somehow Splunk will use that field? Or make the identity field a multivalue field?