All Posts


I am looking for the top 20 memory usage values by individual users over a period of time, restricted to the middle part of the range, for example between 128 and 256 GB. I have already figured out the first part of what I am looking for in the original search query, which is the top 20 memory usage by individual users over a period of time. I am now looking to see if it's possible to get that number over a certain range instead.
Please clarify your requirement. Do you want the top 20 memory values over the whole period, which user used them, and on which day? Or do you want the top 20 users who used the most memory each day? Or do you want the top 20 users who used the most memory overall, and how much they used each day? Or something else?
I am getting results back, but they are a bit different from the desired results. The original search query's results are separated by USER, and that is what I am looking for. This is the result I am getting after running your search query: it only includes the top 20 Mem_Used_GB and is not separated by USER. Is there a way to separate it by USER as well?
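Without seeing the original search, a rough sketch of one way to combine the two requirements, assuming the aggregated field is called Mem_Used_GB (as elsewhere in this thread) and the grouping field is USER; adjust the field names and the aggregation to match your data:

| stats max(Mem_Used_GB) as Mem_Used_GB by USER
| where Mem_Used_GB >= 128 AND Mem_Used_GB <= 256
| sort - Mem_Used_GB
| head 20

The where clause restricts results to the 128 to 256 GB band, and the stats ... by USER keeps one row per user before taking the top 20.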
Hello everyone! This is not a significant issue, but sometimes I observe somewhat strange behavior after pushing a configuration bundle from the deployer. I have numerous default apps containing an app.conf file under the local directory with the content below:

"C:\Program Files\Splunk\etc\shcluster\apps\search\local\app.conf"

[shclustering]
deployer_push_mode = local_only

To push a bundle I use this command:

splunk apply shcluster-bundle -target $host -auth $authString -push-default-apps yes --answer-yes

After successfully pushing the bundle, I sometimes observe the warning below on a random search head:

File Integrity checks found 4 files that did not match the system-provided manifest.

And it is always app.conf, for example:

'C:\Program Files\Splunk\etc\apps\search\default\app.conf'

Is there a way to fix this? Or maybe I'm doing something wrong? Am I wrong to expect that 'local_only' mode never touches the 'default' directory on the target host?
Hi.

We have been forced to add identifiers at the collection tier in our Splunk environment. The way we have solved it is by using _meta with a couple of fields. Now we have some DB data that is getting indexed, and we would like to tag that data the same way. The easy part was editing the /local/inputs.conf file and adding the extra line to all the input stanzas. Unfortunately this didn't work, and as far as I can read, db_inputs.conf doesn't allow the _meta line in its stanzas.

Does anyone have an idea how to solve this problem? My thoughts run in the direction of an index-time eval, but that is a more complex setup.

Kind regards
las
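One possible direction, along the lines of the index-time eval mentioned above: an INGEST_EVAL transform applied to the DB Connect data's sourcetype can add indexed fields at parse time on the indexers or intermediate heavy forwarders. A minimal sketch, where the sourcetype, field names, and values are placeholders to replace with your own:

props.conf
[my_dbx_sourcetype]
TRANSFORMS-add_identifiers = add_collection_identifiers

transforms.conf
[add_collection_identifiers]
INGEST_EVAL = collection_tier="dbx", site_id="site01"

fields.conf
[collection_tier]
INDEXED = true

[site_id]
INDEXED = true

This approach is not specific to DB Connect inputs, so it sidesteps the db_inputs.conf limitation, but it does mean the tagging logic lives on the parsing tier rather than with the input definition.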
Hi Team,

I need to extract the complete user list and their associated roles and groups in AppDynamics. It looks like there is no straightforward approach within AppD. Could you let me know of any other option to get this list?

With the API, I am able to extract only the user list, but I need it with the roles assigned to each user.
This post is old, but if you're here for the same issue, it often happens when your NFS mount points to an invalid path. Execute `nfsiostat <mount-point>` and confirm that each mount point returns a value. If you receive no output or encounter an error, verify that your mount points are specified correctly. If this helps you, please give it a thumbs up so others will be encouraged to check the same.

Regards,
Anderson
Thanks, this works for me; I already checked and tested the result. Just an additional question regarding the 2nd stats command in my post: I also want to count all the results, and if totalAccount >= 10 it should trigger the alert. Should I continue using the 2nd stats command, or should I use a subsearch or join? Here's the 2nd stats command query:

| stats count dc(login_account) as "UniqueAccount" values(login_account) as "Login_Account" values(host) as "HostName" values(Workstation_Name) as Source_Computer values(src_ip) as SourceIP by EventCode
| where UniqueAccount >= 10
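Not an authoritative answer, but one way to get a total without a subsearch or join, assuming totalAccount is meant to be the combined count across all the rows the stats command produces, is to add an eventstats after it. A rough sketch, with the threshold and the summed field as assumptions to adjust:

| stats count dc(login_account) as "UniqueAccount" values(login_account) as "Login_Account" values(host) as "HostName" values(Workstation_Name) as Source_Computer values(src_ip) as SourceIP by EventCode
| eventstats sum(UniqueAccount) as totalAccount
| where totalAccount >= 10

Note that sum(UniqueAccount) adds up the per-EventCode distinct counts; if you need a true distinct count of accounts across all EventCodes, that would have to be computed before splitting by EventCode. You can also keep the existing UniqueAccount condition alongside it, depending on which condition should trigger the alert.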
Hi @mint_choco

I believe you can either use filldown or fillnull here. After your chart command you could use the following. Note that you *might not* have to specify all of the fields here, but if you do not specify them it will only fillnull fields which already exist; therefore, if no values for index4 are found, index4 would not appear at all in your chart.

| fillnull value=0 index1 index2 index3 index4

If you prefer to use the previous value in its place, then use filldown:

| filldown

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
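A small follow-up to the filldown suggestion above: filldown also accepts the same optional field list if you only want to carry values forward for specific columns (the field names below are the same assumed ones as above):

| filldown index1 index2 index3 index4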
Hi @uagraw01

In order to bypass the SAML auth, you need to navigate to the following URL, replacing the fqdn/port with your deployment info:

https://fqdn:splunkport/en-US/account/login?loginType=splunk

This will provide the standard Splunk login form. For more info, also check out the following knowledge base article:
https://splunk.my.site.com/customer/s/article/How-to-login-into-Splunk-using-local-Splunk-accounts-after-configuring-SAML
and further info on the SAML docs page at
https://docs.splunk.com/Documentation/Splunk/latest/Security/ConfigureSSOinSplunkWeb#:~:text=To%20access%20the%20login%20page%20after%20you%20enable%20SAML%2C%20append%20the%20full%20login%20URL

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi Splunkers!!

We have recently configured SSO in Splunk using Keycloak, and it's working fine; users are able to log in through the Keycloak identity provider. Now we have a new requirement where some users should be able to bypass SSO and use the traditional Splunk login (username/password) instead.

Current Setup:
- Splunk SSO is configured via Keycloak (SAML).
- All users are redirected to Keycloak for authentication.

We now want to allow dual login options:
- Primary: SSO via Keycloak (default for most users).
- Secondary: Traditional login for selected users (e.g., admins, service accounts).

Objective: Allow both SSO and non-SSO (Splunk local authentication) login methods to coexist.

Below is our setting for SSO.

[authentication]
authSettings = saml
authType = SAML

[roleMap_SAML]
commissioning_engineer = integration
hlc_support_engineer = integration

[saml]
caCertFile = D:\Splunk\etc\auth\cacert.pem
clientCert = D:\Splunk\etc\auth\server.pem
entityId = splunk
fqdn = https://splunk.kigen-iht-001.cnaw.k8s.kigen.com
idpCertExpirationCheckInterval = 86400s
idpCertExpirationWarningDays = 90
idpCertPath = idpCert.pem
idpSLOUrl = https://keycloak.walamb-iht-001.cnap.k8s.kigen.com/auth/realms/production/protocol/saml
idpSSOUrl = https://keycloak.walamb-iht-001.cnap.k8s.kigen.com/auth/realms/production/protocol/saml
inboundDigestMethod = SHA1;SHA256;SHA384;SHA512
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256;RSA-SHA384;RSA-SHA512
issuerId = https://keycloak.walamb-iht-001.cnap.k8s.kigen.com/auth/realms/production
lockRoleToFullDN = true
redirectPort = 443
replicateCertificates = true
scimEnabled = false
signAuthnRequest = true
signatureAlgorithm = RSA-SHA1
signedAssertion = true
sloBinding = HTTP-POST
sslPassword = $7$CCkQUt0tA8sZJMmU+8kigen0zdv/mxXjJsLRbmuBkEnMfhQ==
ssoBinding = HTTP-POST

[userToRoleMap_SAML]
kg-user = commiss_engineer;hlc_support_engineer::::
If I remember correctly, an alert created "normally" with a "create notable" action should work as well (although it will have fewer configuration options). But in order to be able to create notables, the user the alert runs as must have the proper privileges within the ES app. So if an alert was created through the normal alert creation means instead of ES Content Management, the user might not be able to create notables due to insufficient permissions. (Yes, I'm aware that it sounds a bit convoluted.)
The CLONE_SOURCETYPE option in a transform causes Splunk to create a copy of the processed event (at that moment of the ingestion pipeline, so all the state of the event at that point is retained), change its sourcetype to the one specified in the CLONE_SOURCETYPE option, and reingest the event back at the (almost) beginning of the pipeline (skipping the initial phases of line breaking and time recognition). See the usual https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774

CLONE_SOURCETYPE is called from a transform in the typing phase. The original event is processed without changes, as if the transform containing the CLONE_SOURCETYPE option wasn't there. But the copy is moved back to the typing queue and starts the whole typing phase with the new sourcetype, triggering a completely new set of transforms according to that new sourcetype (and possibly a new source and host, if they were overwritten during the initial transforms run).
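A minimal sketch of that wiring, with placeholder sourcetype and stanza names; the clone receives the new sourcetype and is then matched by whatever props/transforms exist for it:

props.conf
[original:sourcetype]
TRANSFORMS-clone = clone_to_copy

transforms.conf
[clone_to_copy]
REGEX = .
CLONE_SOURCETYPE = copy:sourcetype

props.conf
[copy:sourcetype]
TRANSFORMS-postprocess = my_copy_transforms

Here my_copy_transforms stands for whatever additional transforms (anonymization, routing, and so on) you want to run only against the cloned copy.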
OK, now I get it, so I need to define one more stanza in my transforms to overwrite the _TCP_ROUTING. For example:

transforms.conf
[mydevice_overwrite_tcprouting]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = an_empty_output

props.conf
[my_device_clone]
TRANSFORMS-deletetcprouting = mydevice_overwrite_tcprouting

Something like this? Is it possible to define an empty value in an outputs.conf?

Thanks,
Nicolas
Hi @Nicolas2203

With the anonymized sourcetype you are overwriting the original _TCP_ROUTING with output_externalhf:

DEST_KEY = _TCP_ROUTING
FORMAT = output_externalhf

However, with mydevice:clone you are *not* overwriting the existing _TCP_ROUTING; instead you are also adding _SYSLOG_ROUTING, but this does not overwrite the _TCP_ROUTING. You will need to apply a transform to mydevice:clone that sets _TCP_ROUTING to a blank value to prevent it from using the original local_indexers output.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
OK, so if I understand, when I clone a sourcetype, it will clone its destination too? Not sure I understand; I have some other log sources that I clone and forward to a secondary Splunk with the same clone method:

- Cloning the sourcetype to sourcetype:anonymized
- In the transforms applied to the cloned sourcetype, some regexes for anonymization
- This sourcetype is routed via _TCP_ROUTING to an output that is a heavy forwarder, which routes to the secondary Splunk

For example:

transforms.conf
[firewall_log-clone]
CLONE_SOURCETYPE = firewall_log:clone
REGEX = .*
DEST_KEY = _TCP_ROUTING
FORMAT = output_externalhf

props.conf
[firewall_log]
TRANSFORMS-clone = firewall_log-clone

This is working: logs are properly sent to an HF that forwards them to a secondary Splunk. But it's not a syslog log source, so maybe that is the difference?

Thanks for the help
Hi @yuanliu, I think I was pretty clear. I need the same functionality as "Select matched" in Studio. But anyway, thank you for your efforts.
Have you solved the issue?
Like I said above, there are a million ways to do this, but you have to decide on the exact behavior. In the demo dashboard I posted, I used preselect; you can edit the input to select these 4 as the default selection. An alternative behavior could be a special selection that has the label "all 4" and the four values as its value. Implementation details will depend on how you use the token and so on. There are other alternatives. You need to be clear in describing how you want the UI to behave.
As @bowesmana diagnoses, default field extraction stops at 50K. You can change this in limits.conf; the stanza is [kv], and the property name is maxchars.

I also recommend that you fix another problem @livehybrid hinted at: you should extract the id field from the message field, not from _raw, i.e.,

| rex field=message "(SENDER|RECEIVER)\[(?<id>\d+)\]"
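For reference, a minimal sketch of that limits.conf change; the 102400 value is just an example ceiling, and the setting needs to be deployed where the search-time extraction runs (for example in an app's local directory on the search head):

limits.conf
[kv]
maxchars = 102400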