All Posts


Hello dm1, were you able to migrate the on-premises Search Head to AWS? If so, can you please share the steps/process you followed for the migration? Thanks
I am in a bit of a fix right now, getting the error below when I try to add a new input to Splunk using this document: https://docs.splunk.com/Documentation/AddOns/released/AWS/Setuptheadd-on

Note: the Splunk instance is in a different account than the S3 bucket.

Error response received from the server: Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [400]: Bad Request -- An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied". See splunkd.log/python.log for more details.

I have created an AWS role to allow the user residing in the account where my S3 bucket is, with permissions as below.

Trust relationship: (screenshot omitted)

The user has the S3 full-access and AssumeRole policies attached to it.

Splunk config: the IAM role still shows as undiscovered. (screenshot omitted)

Are there any changes required at the Splunk instance level in the other account so that it can access the policy? TIA for your help!
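Since the ListBuckets call fails with AccessDenied, it may help to confirm both sides of the cross-account setup. Below is a minimal sketch of the trust policy on the role in the bucket's account; the account ID and user name are hypothetical placeholders, not values from the original post:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:user/splunk-s3-user"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The identity in the Splunk instance's account also needs its own policy allowing sts:AssumeRole on this role's ARN, and the assumed role needs s3:ListAllMyBuckets for the ListBuckets call to succeed.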
Hi @gcusello , thank you for your feedback. I got it working as expected. BR
Hello Splunkers! I am learning Splunk, but I've never deployed or worked with Splunk ES in a production environment, especially in a SOC.

As you know, we have notables and investigations in ES, and for both of them we can change the status to indicate whether the investigation is in progress or not, but I am not quite sure how a SOC actually uses these features. That's why I have a couple of questions about that.

1) Do analysts always start an investigation when they are about to handle a notable in the Incident Review tab? Probably the first thing analysts do is change the status from "new" to "in progress" and assign the event to themselves, to indicate that they are handling the notable, but do they also start a new investigation or add the notable to an existing one, or can an analyst handle the notable without doing either?

2) When a notable has been added to an investigation, what do analysts do once they figure out the disposition (complete their investigation)? Do they merely change the status by editing the investigation and the notable in their associated tabs? Do they always put their conclusions about an incident in the comment section, as described in this article: The Five Step SOC Analyst Method. This 5-step security analysis… | by Tyler Wall | Medium?

3) Does a first-level SOC analyst directly set the status to "closed" when the notable/investigation is completed, or does he/she always have to set it to "resolved" for confirmation by more experienced colleagues?

I hope my questions are clear. Thanks for taking the time to read my post and reply to it.
Please tell me how to make the output replace some characters in the field definitions. Specifically, the problem is that the following two formats of MAC address are mixed across multiple logs imported into Splunk:

AA:BB:CC:00:11:22
AA-BB-CC-00-11-22

I would like to unify the MAC address field in the logs into the form "AA:BB:CC:00:11:22" in advance, because I want to look up the host name from the MAC address in an automatic lookup table definition.

In the search bar, I can extract the address and output the corrected value as "MacAddr":

index="Log" | rex "^.+?\scli\s(?<CL_MacAddr>.+?)\)" | eval MacAddr = replace(CL_MacAddr, "-", ":")

Alternatively, I could replace the existing field "CL_MacAddr" with the corrected version in place:

index="Log" | rex mode=sed field=CL_MacAddr "s/-/:/g"

I am trying to set this up in the GUI's field extraction and field transformation so that the corrected value is always available, but it does not work. Or can it be set directly in transforms.conf, and in that case, what values can be set and where? I know this is basic, but I would appreciate your help. Thank you in advance.
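One way to get the normalized field at search time without touching the raw data is a calculated field in props.conf. A minimal sketch, assuming the events already have an extracted CL_MacAddr field and a sourcetype named your_sourcetype (both names are placeholders):

# props.conf on the search head
[your_sourcetype]
# EVAL- defines a calculated field; replace() swaps "-" for ":"
EVAL-MacAddr = replace(CL_MacAddr, "-", ":")

Note that calculated fields go in props.conf rather than transforms.conf, and since they are applied after field extractions but before automatic lookups, MacAddr can then be used in the automatic lookup definition.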
Update: it actually did work! I just opened the dashboard in a search and the time-picker is indeed applied.
No. I commented out the code for the proxy setting.
Hi, I want to know how to change an add-on from the archived state to active on the Splunkbase site. I have already submitted a new version of the add-on file, but the add-on is still in archived status.
One minor request: if this logging is ever enhanced, can it please include the output group name?

05-16-2024 03:18:05.992 +0000 WARN AutoLoadBalancedConnectionStrategy [85268 TcpOutEloop] - Current dest host connection <ip address>:9997, oneTimeClient=0, _events.size()=56156, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Thu May 16 03:18:03 2024 is using 31477941 bytes. Total tcpout queue size is 31457280. Warningcount=1001

This is helpful; however, the destination IP happens to be Istio (a K8s software load balancer), and I have 3 indexer clusters with different DNS names on the same IP/port (the incoming DNS name determines which backend gets used). So my only way to "guess" the outputs.conf stanza involved is to set a unique queue size for each one, so I can determine which indexer cluster / output stanza is having the high warning count, as sketched below. If the warning included tcpout=<stanzaname> or similar, that would be very helpful for me. Thanks
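A minimal sketch of that workaround, with hypothetical group names, hostnames, and sizes (all placeholders, not from the original post):

# outputs.conf - distinct maxQueueSize per group, so the
# "Total tcpout queue size is <N>" value in the WARN line identifies the stanza
[tcpout:cluster_a]
server = idxc-a.example.com:9997
maxQueueSize = 30MB

[tcpout:cluster_b]
server = idxc-b.example.com:9997
maxQueueSize = 31MB

[tcpout:cluster_c]
server = idxc-c.example.com:9997
maxQueueSize = 32MB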
As always, the answer depends a lot on data characteristics and the real problem you are trying to solve. Maybe you can explain why the second look, which is highly unconventional, is more desirable? Is it safe to say that search_name, ID, and Time are a triplet that should be treated as a unit? In that case, wouldn't this form be more human-friendly?

Time      search_name  ID
13:27:17  UC-315       7zAt/7
13:27:17  UC-231       5Dfxdf

(This, of course, is the default time series, a.k.a. Splunky, presentation.)
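For what it's worth, a minimal SPL sketch that produces the layout above, assuming the three fields already exist in the events:

... | table Time search_name ID | sort 0 Time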
To search on the resource field, use the where command.

index="xxxx" "role-'WRITE'" OR "role-'READ'"
| rex "User-(?<userid>[^,]*)"
| rex "(?<resource>\w+)$"
| where resource="GHIJKL"
| eval userid=upper(userid)
| stats c as Count latest(_time) as _time by userid
Per the outputs.conf.spec file, # These settings are only applicable under the global [tcpout] stanza. # This filter does not work if it is created under any other stanza.
Hi, did you check sslVersions in authentication.conf and server.conf? Check that the SSL version is consistent among cluster members. Regards.
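A minimal sketch of where that setting lives in server.conf; the value here is illustrative, not from the original post:

# server.conf - use the same value on every cluster member
[sslConfig]
sslVersions = tls1.2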
Hi all, I have a number of forwarders that send a lot of logs to different indexes. For example, there are three indexes: Windows, Linux, and Firewall. A new cluster has been set up, and I am planning to forward only some logs to the new cluster based on the index name. For example, of the three indexes Windows, Linux, and Firewall, I'm going to send only Firewall to the new cluster. This is the configuration that I tried to create:

[tcpout]
defaultGroup = dag,dag-n

[tcpout:dag]
disabled = false
server = p0:X,p1:Y

[tcpout:dag-n]
disabled = false
server = pn:Z
forwardedindex.0.whitelist = firewall
forwardedindex.1.blacklist = .*

Unfortunately, some logs from both the Windows and Linux indexes are still sent to the new cluster, and because no index is defined for them on the new cluster, they frequently cause errors. The thing that came to my mind was that maybe I should empty the default whitelist and blacklist first. Anyone have any ideas?
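As noted elsewhere in this thread, forwardedindex.* filters only take effect under the global [tcpout] stanza, so they are silently ignored inside [tcpout:dag-n]. One alternative is per-input routing with _TCP_ROUTING in inputs.conf. A minimal sketch, assuming the firewall data comes from a hypothetical monitored file (the path is a placeholder):

# outputs.conf - route everything to the old cluster by default
[tcpout]
defaultGroup = dag

# inputs.conf - only the firewall input also goes to the new cluster
[monitor:///var/log/firewall.log]
index = firewall
_TCP_ROUTING = dag,dag-n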
Currently the Cisco Networks app is looking in all indexes when searching for the cisco:ios sourcetype. I'm looking for an easy way to restrict this to a single index to help improve performance. There are no config options in the app or add-on that I can see. Any thoughts?
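If the app's searches are driven by an eventtype or macro, one common approach is to override it locally. A minimal sketch, assuming a hypothetical eventtype name cisco_ios and index name network (check the app's default/eventtypes.conf for the real stanza name):

# local/eventtypes.conf inside the app
[cisco_ios]
search = index=network sourcetype=cisco:ios

Alternatively, restricting the searching role's default indexes (srchIndexesDefault in authorize.conf) achieves a similar effect without touching the app.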
I don't know much about roles/permissions, but I have created dozens of other alerts. The only difference I can tell is that all my other alerts use searches that return events when triggered. This search only returns results in the statistics section.
Hi, if your DS is 9.2.x, have you read this: https://community.splunk.com/t5/Deployment-Architecture/The-Client-forwarder-management-not-showing-the-clients/m-p/677225#M27893? r. Ismo
Hi, this should work. Are you sure that you have been granted a role which allows you to run alerts? r. Ismo
Hi, there is an order which defines how those are extracted, aliased, etc. You can see it e.g. here: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Knowledge/Searchtimeoperationssequence. Based on that, you can see that in the extraction phase you cannot use aliases, as those are applied only after all extractions have been done. r. Ismo
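A minimal sketch of the pitfall, with hypothetical field and sourcetype names:

# props.conf
[my_sourcetype]
# EXTRACT- runs in the extraction phase...
EXTRACT-user = user=(?<user_name>\w+)
# ...while FIELDALIAS- runs after ALL extractions are done,
FIELDALIAS-user = user_name AS user
# so another EXTRACT cannot use the alias "user" as its source field:
# EXTRACT-bad = (?<dept>\w+) in user   <- "user" does not exist yet at extraction time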
Hi, there are some apps which you could use to continuously generate sample data. Probably the most used is Eventgen: https://splunkbase.splunk.com/app/1924. One option is simply to tar that index and untar it on the target systems. Of course, you also need an app to define the index on those target systems. This requires that the nodes are sufficiently similar, e.g. the same Linux version. If I recall correctly, something like this was done for the older BOTS datasets? r. Ismo